Dataset schema:

| Column | Type | Min | Max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-23 18:27:52 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (492 classes) | | |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-23 18:25:26 |
| card | string (length) | 11 | 1.01M |
uripper/AVA
uripper
2023-07-13T08:15:52Z
10
0
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "license:cc", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-08-22T20:54:37Z
---
license: cc
widget:
- text: "Movie: Parasite Score:"
  example_title: "Parasite"
- text: "Movie: Come and See Score:"
  example_title: "Come and See"
- text: "Movie: Harakiri Score:"
  example_title: "Harakiri"
---

# Review Training Bot

This model was trained to generate scores and reviews for any given movie. It is fine-tuned from distilgpt2 on a custom dataset built by scraping around 120k Letterboxd reviews. The current model (version 0.1) reliably produces the correct formatting but is often prone to gibberish; further training will hopefully add coherency.

## Intended uses & limitations

This model is intended for entertainment. Its limitations are largely the same as those of distilgpt2, which can be viewed here: https://huggingface.co/distilgpt2. These may include persistent biases. Another issue is Letterboxd-specific language the model may not understand: e.g., an LGBT+ film on Letterboxd may have multiple reviews that use the word "gay" positively, but this model has not learned that contextual usage and may use the word as a slur. Since the current model also struggles to connect movie titles to reviews, this can happen with any entered movie.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 10
- eval_batch_size: 20
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 5000

### Framework versions

- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Tokenizers 0.12.1
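Following the widget prompts in the front matter, a minimal inference sketch with the transformers pipeline (the prompt format mirrors the widget examples; sampling settings are assumptions, not part of the card):

```python
from transformers import pipeline

# Hypothetical usage sketch: the prompt format follows the widget examples above.
generator = pipeline("text-generation", model="uripper/AVA")
out = generator("Movie: Parasite Score:", max_new_tokens=60, do_sample=True)
print(out[0]["generated_text"])
```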
Ablustrund/moss-rlhf-reward-model-7B-zh
Ablustrund
2023-07-13T08:10:42Z
3
23
null
[ "llm", "reward model", "moss", "rlhf", "zh", "arxiv:2307.04964", "license:agpl-3.0", "region:us" ]
null
2023-07-12T02:27:02Z
---
license: agpl-3.0
language:
- zh
tags:
- llm
- reward model
- moss
- rlhf
---

# MOSS-RLHF

### *MOSS-RLHF & "Secrets of RLHF in Large Language Models Part I: PPO"*
<br>👉 <a href="https://arxiv.org/abs/2307.04964" target="_blank">[Technical report]</a> <a href="https://openlmlab.github.io/MOSS-RLHF/" target="_blank">[Home page]</a>

## 🌟 News

### 👉 Wed, 12 July 2023. We have released a Chinese reward model based on OpenChineseLlama-7B!
[moss-rlhf-reward-model-7B-zh](https://huggingface.co/Ablustrund/moss-rlhf-reward-model-7B-zh/tree/main)
<br>

### 👉 Thu, 13 July 2023. We have released an English reward model and an SFT model based on Llama-7B!
[moss-rlhf-reward-model-7B-en](https://huggingface.co/fnlp/moss-rlhf-reward-model-7B-en)
[moss-rlhf-sft-model-7B-en](https://huggingface.co/fnlp/moss-rlhf-sft-model-7B-en)
<br>

## 🧾 Open-source List

- [x] Open-source code for RL training of large language models.
- [x] A 7B Chinese reward model based on OpenChineseLlama.
- [x] A 7B English reward model based on Llama-7B.
- [x] SFT model for English.
- [ ] Policy model for English after RLHF.
- ...

## 🌠 Introduction

Due to the challenges of reward design, environment interaction, and agent training, coupled with the huge trial-and-error cost of large language models, there is a significant barrier for AI researchers working on the technical alignment and safe deployment of LLMs. Stable RLHF training remains a puzzle. In this technical report, we aim to help researchers train their models stably with human feedback. Our contributions are summarized as follows: 1) We release competitive Chinese and English reward models with good cross-model generalization ability, reducing the cost of relabeling human preference data; 2) We conduct an in-depth analysis of the inner workings of the PPO algorithm and propose the PPO-max algorithm to ensure stable model training; 3) We release the complete PPO-max code so that LLMs at the SFT stage can be better aligned with humans.

## 🔩 Requirements & Setup

This repository works on Python 3.8 and PyTorch 1.13.1. We recommend using a **conda** virtual environment to run the code.

#### Step 1: Create a new Python virtual environment

```bash
conda update conda -n base -c defaults
conda create -n rlhf python=3.8
conda activate rlhf
```

#### Step 2: Install PyTorch and TensorBoard

```bash
conda install pytorch==1.13.1 pytorch-cuda=11.7 tensorboard -c pytorch -c nvidia
```

#### Step 3: Install the remaining dependencies

```bash
conda install datasets accelerate safetensors chardet cchardet -c huggingface -c conda-forge
pip3 install transformers sentencepiece einops triton==1.0.0 rouge jionlp==1.4.14 nltk sacrebleu cpm_kernels
apt install libaio-dev
DS_BUILD_OPS=1 pip install deepspeed
```

## ✨ Start training your own model!

Run the code in a few steps.

### Step 1: Recover the reward model weights

We cannot directly release the full weights of the reward model because of protocol restrictions. You can merge the released diff weights with the original Llama-7B to recover the reward model we used. We upload the diff models (thanks to tatsu-lab); you can recover the reward model by following these steps:

1) Download the weight diff to your local machine.
The weight diffs are located at:

- For English: TODO
- For Chinese: https://huggingface.co/Ablustrund/moss-rlhf-reward-model-7B-zh/tree/main

2) Merge the weight diff with the original Llama-7B:

```bash
# For English:
# Reward model
python merge_weight_en.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-reward-model-7B-en/diff --path_tuned ./models/moss-rlhf-reward-model-7B-en/recover --model_type reward
# SFT model
python merge_weight_en.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-sft-model-7B-en/diff --path_tuned ./models/moss-rlhf-sft-model-7B-en/recover --model_type sft
# Policy model
TODO

# For Chinese:
python merge_weight_zh.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-reward-model-7B-zh/diff --path_tuned ./models/moss-rlhf-reward-model-7B-zh/recover
```

### Step 2: Select your own SFT model

Because of some limitations, we cannot release the **Chinese** SFT model currently. You can use your own SFT model, or a strong base model, instead of our SFT model.

### Step 3: Start training

Run the command below.

```bash
# For Chinese:
# You need to use your own SFT model currently.
bash run_zh.sh

# For English:
# We have uploaded the SFT model and reward model to Hugging Face.
bash run_en.sh
```

## Citation

```bibtex
@article{zheng2023secrets,
  title={Secrets of RLHF in Large Language Models Part I: PPO},
  author={Rui Zheng and Shihan Dou and Songyang Gao and Wei Shen and Binghai Wang and Yan Liu and Senjie Jin and Qin Liu and Limao Xiong and Lu Chen and Zhiheng Xi and Yuhao Zhou and Nuo Xu and Wenbin Lai and Minghao Zhu and Rongxiang Weng and Wensen Cheng and Cheng Chang and Zhangyue Yin and Yuan Hua and Haoran Huang and Tianxiang Sun and Hang Yan and Tao Gui and Qi Zhang and Xipeng Qiu and Xuanjing Huang},
  year={2023},
  eprint={2307.04964},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
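As an aside on Step 1 above: conceptually, recovering the reward model is an element-wise addition of the released diff onto the base weights. A minimal sketch of that idea (paths and single-file packaging are hypothetical; the actual merge_weight_*.py scripts also handle tokenizer and config files):

```python
import torch

# Hypothetical sketch: recovered weight = base weight + released diff.
# Assumes matching parameter names and single-file checkpoints.
base = torch.load("llama-7b-hf/pytorch_model.bin", map_location="cpu")
diff = torch.load("moss-rlhf-reward-model-7B-zh/diff/pytorch_model.bin", map_location="cpu")
recovered = {name: base[name] + diff[name] for name in diff}
torch.save(recovered, "moss-rlhf-reward-model-7B-zh/recover/pytorch_model.bin")
```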
veluchs/dqn-SpaceInvadersNoFrameskip-v4-newest
veluchs
2023-07-13T08:04:45Z
4
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-13T08:01:41Z
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 257.00 +/- 38.81
      name: mean_reward
      verified: false
---

# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**

This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).

The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib

Install the RL Zoo (with SB3 and SB3-Contrib):

```bash
pip install rl_zoo3
```

```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga veluchs -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run from anywhere:

```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga veluchs -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

## Training (with the RL Zoo)

```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga veluchs
```

## Hyperparameters

```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 10000),
             ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.0001),
             ('learning_starts', 100000),
             ('n_timesteps', 100000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```

# Environment Arguments

```python
{'render_mode': 'rgb_array'}
```
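If you prefer to bypass the RL Zoo CLI, a minimal sketch using huggingface_sb3 and SB3 directly (the checkpoint filename follows the usual RL Zoo naming convention and is an assumption here):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Assumed filename per the RL Zoo convention: <algo>-<env>.zip
checkpoint = load_from_hub(
    repo_id="veluchs/dqn-SpaceInvadersNoFrameskip-v4-newest",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
# Note: evaluating this agent still requires the Atari env wrapped with
# AtariWrapper and 4-frame stacking, matching the hyperparameters above.
model = DQN.load(checkpoint)
```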
yubuu/path-to-save-model
yubuu
2023-07-13T08:03:07Z
30
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-13T07:51:30Z
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---

# DreamBooth - yubuu/path-to-save-model

This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. DreamBooth training for the text encoder was not enabled.
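A minimal diffusers inference sketch for this checkpoint (the prompt reuses the instance token from the front matter; fp16 and CUDA are assumptions):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned DreamBooth weights from this repo.
pipe = StableDiffusionPipeline.from_pretrained(
    "yubuu/path-to-save-model", torch_dtype=torch.float16
).to("cuda")

# "sks" is the instance token the model was trained on (see front matter).
image = pipe("a photo of sks dog in a bucket", num_inference_steps=50).images[0]
image.save("sks_dog.png")
```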
haxett333/RL-Reinforce-100TrainEpisodesInsteadof1000
haxett333
2023-07-13T08:00:13Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-13T08:00:09Z
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: RL-Reinforce-100TrainEpisodesInsteadof1000
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 98.70 +/- 36.77
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
saeedehj/led-base-finetune-cnn
saeedehj
2023-07-13T07:50:12Z
34
0
transformers
[ "transformers", "pytorch", "tensorboard", "led", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-12T22:27:22Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: led-base-16384-finetune-cnn
  results: []
---

# led-base-16384-finetune-cnn

This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the cnn_dailymail dataset. It achieves the following results on the evaluation set:
- Loss: 3.2020
- Rouge1: 24.2258
- Rouge2: 9.0151
- Rougel: 19.0336
- Rougelsum: 22.2604
- Gen Len: 20.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.8988 | 1.0 | 2000 | 2.0031 | 25.1709 | 10.0426 | 20.1311 | 23.1639 | 20.0 |
| 1.6038 | 2.0 | 4000 | 2.0314 | 25.0213 | 9.8701 | 19.8987 | 23.0129 | 20.0 |
| 1.3352 | 3.0 | 6000 | 2.1124 | 24.99 | 9.905 | 19.9566 | 23.0973 | 20.0 |
| 1.1173 | 4.0 | 8000 | 2.2055 | 25.0568 | 10.0949 | 19.9602 | 23.18 | 20.0 |
| 0.9566 | 5.0 | 10000 | 2.3262 | 24.941 | 9.5856 | 19.6285 | 23.042 | 20.0 |
| 0.7986 | 6.0 | 12000 | 2.4489 | 24.4114 | 9.2808 | 19.3296 | 22.5481 | 20.0 |
| 0.6685 | 7.0 | 14000 | 2.5211 | 24.467 | 9.5124 | 19.2685 | 22.5624 | 20.0 |
| 0.5601 | 8.0 | 16000 | 2.6299 | 24.6939 | 9.6533 | 19.4627 | 22.8048 | 20.0 |
| 0.4757 | 9.0 | 18000 | 2.7185 | 24.2098 | 9.1232 | 19.0181 | 22.4085 | 20.0 |
| 0.3926 | 10.0 | 20000 | 2.7947 | 24.5092 | 9.3964 | 19.2593 | 22.5592 | 20.0 |
| 0.3391 | 11.0 | 22000 | 2.8626 | 24.4731 | 9.3634 | 19.2966 | 22.5688 | 20.0 |
| 0.2872 | 12.0 | 24000 | 2.9175 | 24.5587 | 9.3888 | 19.3335 | 22.6443 | 20.0 |
| 0.2479 | 13.0 | 26000 | 2.9658 | 24.2983 | 9.1038 | 19.019 | 22.3675 | 20.0 |
| 0.213 | 14.0 | 28000 | 3.0273 | 24.4196 | 9.1481 | 19.0458 | 22.5135 | 20.0 |
| 0.1828 | 15.0 | 30000 | 3.0751 | 24.3283 | 9.2334 | 18.9771 | 22.3322 | 20.0 |
| 0.1608 | 16.0 | 32000 | 3.1185 | 24.3965 | 9.2047 | 19.0899 | 22.4666 | 20.0 |
| 0.1442 | 17.0 | 34000 | 3.1494 | 24.3832 | 9.1915 | 19.077 | 22.4366 | 20.0 |
| 0.1293 | 18.0 | 36000 | 3.1738 | 24.3796 | 9.1132 | 19.1015 | 22.3862 | 20.0 |
| 0.1165 | 19.0 | 38000 | 3.2073 | 24.2804 | 9.1018 | 19.0692 | 22.3023 | 20.0 |
| 0.1118 | 20.0 | 40000 | 3.2020 | 24.2258 | 9.0151 | 19.0336 | 22.2604 | 20.0 |

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
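A minimal summarization sketch with the transformers pipeline (the Gen Len of 20.0 above suggests very short generations; the max_length value is an assumption chosen to match):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="saeedehj/led-base-finetune-cnn")
article = "..."  # a CNN/DailyMail-style news article
# Gen Len of 20.0 in the results above suggests short summaries by default.
print(summarizer(article, max_length=20)[0]["summary_text"])
```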
jslin09/LegalChatbot-bloom-3b
jslin09
2023-07-13T07:45:16Z
19
0
peft
[ "peft", "region:us" ]
null
2023-07-06T02:44:57Z
---
library_name: peft
---

## Training procedure

### Framework versions

- PEFT 0.4.0.dev0
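The card itself carries no usage details; a minimal loading sketch under the assumption, inferred only from the repo name, that the adapter targets bigscience/bloom-3b:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed base model, inferred from the repo name "LegalChatbot-bloom-3b".
base = AutoModelForCausalLM.from_pretrained("bigscience/bloom-3b")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-3b")
model = PeftModel.from_pretrained(base, "jslin09/LegalChatbot-bloom-3b")
```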
JeffreyHuang/llm-selector
JeffreyHuang
2023-07-13T07:30:31Z
45
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-27T04:16:52Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: llm-selector
  results: []
---

# llm-selector

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.7315
- Accuracy: 0.5048

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 118 | 1.8920 | 0.3714 |
| No log | 2.0 | 236 | 1.7753 | 0.5143 |
| No log | 3.0 | 354 | 1.7671 | 0.4952 |
| No log | 4.0 | 472 | 1.7441 | 0.5048 |
| 1.8665 | 5.0 | 590 | 1.7315 | 0.5048 |
| 1.8665 | 6.0 | 708 | 1.7413 | 0.5048 |
| 1.8665 | 7.0 | 826 | 1.7378 | 0.4667 |
| 1.8665 | 8.0 | 944 | 1.7426 | 0.4667 |
| 1.7254 | 9.0 | 1062 | 1.7513 | 0.4476 |
| 1.7254 | 10.0 | 1180 | 1.7513 | 0.4476 |

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
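A minimal classification sketch (the label set is not documented in the card, so the output labels come from whatever the checkpoint's config defines):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="JeffreyHuang/llm-selector")
# Label names come from the checkpoint's config; the card does not list them.
print(clf("Write a short poem about the ocean."))
```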
vineetsharma/dqn-SpaceInvadersNoFrameskip-v4
vineetsharma
2023-07-13T07:13:19Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-13T07:12:43Z
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 560.00 +/- 101.24
      name: mean_reward
      verified: false
---

# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**

This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).

The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib

Install the RL Zoo (with SB3 and SB3-Contrib):

```bash
pip install rl_zoo3
```

```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga vineetsharma -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run from anywhere:

```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga vineetsharma -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

## Training (with the RL Zoo)

```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga vineetsharma
```

## Hyperparameters

```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 100000),
             ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.0001),
             ('learning_starts', 100000),
             ('n_timesteps', 1000000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```

# Environment Arguments

```python
{'render_mode': 'rgb_array'}
```
kaelee/llava-lightning-mpt-7b-chat-pretrain
kaelee
2023-07-13T07:08:09Z
14
0
transformers
[ "transformers", "pytorch", "llava_mpt", "text-generation", "custom_code", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-07-13T00:20:35Z
---
license: cc-by-nc-sa-4.0
---
aiacademy131/opt-2.7b-lora
aiacademy131
2023-07-13T06:34:01Z
1
0
peft
[ "peft", "region:us" ]
null
2023-07-13T05:36:48Z
---
library_name: peft
---

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32

### Framework versions

- PEFT 0.4.0.dev0
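A minimal sketch of loading the adapter on its presumed base in 8-bit, mirroring the config above (the base model facebook/opt-2.7b is inferred from the repo name and is not stated in the card):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Assumed base model, inferred from the repo name "opt-2.7b-lora".
# load_in_8bit=True mirrors the bitsandbytes config listed above.
base = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-2.7b", load_in_8bit=True, device_map="auto"
)
model = PeftModel.from_pretrained(base, "aiacademy131/opt-2.7b-lora")
```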
smithlai/q-FrozenLake-v1-4x4-noSlippery
smithlai
2023-07-13T06:33:59Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-13T06:33:57Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gym

# load_from_hub is the helper defined in the Deep RL Course notebooks.
model = load_from_hub(repo_id="smithlai/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
localmodels/WizardLM-13B-v1.1-GPTQ
localmodels
2023-07-13T06:11:46Z
7
0
transformers
[ "transformers", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-07-13T06:11:46Z
---
duplicated_from: localmodels/LLM
---

# WizardLM 13B v1.1 GPTQ

From: https://huggingface.co/WizardLM/WizardLM-13B-V1.1

---

| Model | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
| ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
| wizardlm-13b-v1.1-GPTQ-4bit-128g.no-act.order | 4 | 128 | False | 7.45 GB | True | GPTQ-for-LLaMa | Most compatible. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. |
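A minimal AutoGPTQ loading sketch for the file in the table (model_basename mirrors the filename above; safetensors packaging and CUDA device are assumptions):

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo = "localmodels/WizardLM-13B-v1.1-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=True)
# model_basename matches the file listed in the table above (an assumption
# about how the checkpoint is packaged in this repo).
model = AutoGPTQForCausalLM.from_quantized(
    repo,
    model_basename="wizardlm-13b-v1.1-GPTQ-4bit-128g.no-act.order",
    use_safetensors=True,
    device="cuda:0",
)
```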
YanJiangJerry/SA-tweet-roberta-large-e4-w1-1.5-b16-m4
YanJiangJerry
2023-07-13T05:42:53Z
105
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-13T05:19:19Z
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: SA-tweet-roberta-large-e4-w1-1.5-b16-m4
  results: []
---

# SA-tweet-roberta-large-e4-w1-1.5-b16-m4

This model is a fine-tuned version of [Amalq/autotrain-smm4h_large_roberta_clean-874027878](https://huggingface.co/Amalq/autotrain-smm4h_large_roberta_clean-874027878) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.3545
- Accuracy: 0.945
- F1: 0.9511
- Precision: 0.9537
- Recall: 0.9486

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 285 | 0.1933 | 0.92 | 0.9290 | 0.9306 | 0.9273 |
| 0.2508 | 2.0 | 570 | 0.2097 | 0.933 | 0.9411 | 0.9337 | 0.9486 |
| 0.2508 | 3.0 | 855 | 0.2958 | 0.937 | 0.9450 | 0.9312 | 0.9592 |
| 0.0947 | 4.0 | 1140 | 0.3545 | 0.945 | 0.9511 | 0.9537 | 0.9486 |

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
preetham/rpanda
preetham
2023-07-13T05:42:03Z
30
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-13T05:23:12Z
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks panda
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---

# DreamBooth - preetham/rpanda

This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on "a photo of sks panda" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. DreamBooth training for the text encoder was not enabled.
localmodels/Guanaco-33B-GPTQ
localmodels
2023-07-13T05:28:12Z
5
0
transformers
[ "transformers", "llama", "text-generation", "arxiv:2305.14314", "arxiv:2302.13971", "arxiv:2304.07327", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-07-13T05:28:12Z
---
duplicated_from: localmodels/LLM
---

# Guanaco 33B GPTQ

From: https://huggingface.co/timdettmers/guanaco-33b-merged

---

## Model

* Guanaco-33B-GPTQ-4bit.act-order.safetensors
* Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
* Works with AutoGPTQ
* Parameters: Groupsize = None. --act-order.

---

# Guanaco Models Based on LLaMA

| [Paper](https://arxiv.org/abs/2305.14314) | [Code](https://github.com/artidoro/qlora) | [Demo](https://huggingface.co/spaces/uwnlp/guanaco-playground-tgi) |

**The Guanaco models are open-source finetuned chatbots obtained through 4-bit QLoRA tuning of LLaMA base models on the OASST1 dataset. They are available in 7B, 13B, 33B, and 65B parameter sizes.**

⚠️ Guanaco is a model purely intended for research purposes and could produce problematic outputs.

## Why use Guanaco?

- **Competitive with commercial chatbot systems on the Vicuna and OpenAssistant benchmarks** (ChatGPT and BARD) according to human and GPT-4 raters. We note that the relative performance on tasks not covered in these benchmarks could be very different. In addition, commercial systems evolve over time (we used outputs from the March 2023 version of the models).
- **Available open-source for research purposes**. Guanaco models allow *cheap* and *local* experimentation with high-quality chatbot systems.
- **Replicable and efficient training procedure** that can be extended to new use cases. Guanaco training scripts are available in the [QLoRA repo](https://github.com/artidoro/qlora).
- **Rigorous comparison to 16-bit methods** (both 16-bit full-finetuning and LoRA) in [our paper](https://arxiv.org/abs/2305.14314) demonstrates the effectiveness of 4-bit QLoRA finetuning.
- **Lightweight** checkpoints which only contain adapter weights.

## License and Intended Use

Guanaco adapter weights are available under the Apache 2 license. Note that use of the Guanaco adapter weights requires access to the LLaMA model weights. Guanaco is based on LLaMA and therefore should be used according to the LLaMA license.

## Usage

Here is an example of how you would load Guanaco 7B in 4-bits:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "huggyllama/llama-7b"
adapters_name = 'timdettmers/guanaco-7b'
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_4bit=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    max_memory={i: '24000MB' for i in range(torch.cuda.device_count())},
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
        bnb_4bit_use_double_quant=True,
        bnb_4bit_quant_type='nf4'
    ),
)
model = PeftModel.from_pretrained(model, adapters_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

Inference can then be performed as usual with HF models as follows:

```python
prompt = "Introduce yourself"
formatted_prompt = (
    f"A chat between a curious human and an artificial intelligence assistant."
    f"The assistant gives helpful, detailed, and polite answers to the user's questions.\n"
    f"### Human: {prompt} ### Assistant:"
)
inputs = tokenizer(formatted_prompt, return_tensors="pt").to("cuda:0")
outputs = model.generate(inputs=inputs.input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Expected output similar to the following:

```
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
### Human: Introduce yourself ### Assistant: I am an artificial intelligence assistant. I am here to help you with any questions you may have.
```

## Current Inference Limitations

Currently, 4-bit inference is slow. We recommend loading in 16 bits if inference speed is a concern. We are actively working on releasing efficient 4-bit inference kernels.

Below is how you would load the model in 16 bits:

```python
model_name = "huggyllama/llama-7b"
adapters_name = 'timdettmers/guanaco-7b'
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    max_memory={i: '24000MB' for i in range(torch.cuda.device_count())},
)
model = PeftModel.from_pretrained(model, adapters_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

## Model Card

**Architecture**: The Guanaco models are LoRA adapters to be used on top of LLaMA models. They are added to all layers. For all model sizes, we use $r=64$.

**Base Model**: Guanaco uses LLaMA as base model with sizes 7B, 13B, 33B, 65B. LLaMA is a causal language model pretrained on a large corpus of text. See [LLaMA paper](https://arxiv.org/abs/2302.13971) for more details. Note that Guanaco can inherit biases and limitations of the base model.

**Finetuning Data**: Guanaco is finetuned on OASST1. The exact dataset is available at [timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco).

**Languages**: The OASST1 dataset is multilingual (see [the paper](https://arxiv.org/abs/2304.07327) for details) and as such Guanaco responds to user queries in different languages. We note, however, that OASST1 is heavy in high-resource languages. In addition, human evaluation of Guanaco was only performed in English, and based on qualitative analysis we observed degradation in performance in other languages.

Next, we describe Training and Evaluation details.

### Training

Guanaco models are the result of 4-bit QLoRA supervised finetuning on the OASST1 dataset. All models use NormalFloat4 datatype for the base model and LoRA adapters on all linear layers with BFloat16 as computation datatype. We set LoRA $r=64$, $\alpha=16$. We also use Adam beta2 of 0.999, max grad norm of 0.3 and LoRA dropout of 0.1 for models up to 13B, and 0.05 for the 33B and 65B models. For the finetuning process, we use a constant learning rate schedule and the paged AdamW optimizer.

### Training hyperparameters

| Size | Dataset | Batch Size | Learning Rate | Max Steps | Sequence length |
|---|---|---|---|---|---|
| 7B | OASST1 | 16 | 2e-4 | 1875 | 512 |
| 13B | OASST1 | 16 | 2e-4 | 1875 | 512 |
| 33B | OASST1 | 16 | 1e-4 | 1875 | 512 |
| 65B | OASST1 | 16 | 1e-4 | 1875 | 512 |

### Evaluation

We test generative language capabilities through both automated and human evaluations. This second set of evaluations relies on queries curated by humans and aims at measuring the quality of model responses. We use the Vicuna and OpenAssistant datasets with 80 and 953 prompts respectively.

In both human and automated evaluations, for each prompt, raters compare all pairs of responses across the models considered. For human raters we randomize the order of the systems; for GPT-4 we evaluate with both orders.

| Benchmark | Vicuna | | Vicuna | | OpenAssistant | | |
|-----------|--------|------|--------|------|---------------|------|-----------------|
| Prompts | 80 | | 80 | | 953 | | |
| Judge | Human | | GPT-4 | | GPT-4 | | |
| Model | Elo | Rank | Elo | Rank | Elo | Rank | **Median Rank** |
| GPT-4 | 1176 | 1 | 1348 | 1 | 1294 | 1 | 1 |
| Guanaco-65B | 1023 | 2 | 1022 | 2 | 1008 | 3 | 2 |
| Guanaco-33B | 1009 | 4 | 992 | 3 | 1002 | 4 | 4 |
| ChatGPT-3.5 Turbo | 916 | 7 | 966 | 5 | 1015 | 2 | 5 |
| Vicuna-13B | 984 | 5 | 974 | 4 | 936 | 5 | 5 |
| Guanaco-13B | 975 | 6 | 913 | 6 | 885 | 6 | 6 |
| Guanaco-7B | 1010 | 3 | 879 | 8 | 860 | 7 | 7 |
| Bard | 909 | 8 | 902 | 7 | - | - | 8 |

We also use the MMLU benchmark to measure performance on a range of language understanding tasks. This is a multiple-choice benchmark covering 57 tasks including elementary mathematics, US history, computer science, law, and more. We report 5-shot test accuracy.

| Dataset | 7B | 13B | 33B | 65B |
|---|---|---|---|---|
| LLaMA no tuning | 35.1 | 46.9 | 57.8 | 63.4 |
| Self-Instruct | 36.4 | 33.3 | 53.0 | 56.7 |
| Longform | 32.1 | 43.2 | 56.6 | 59.7 |
| Chip2 | 34.5 | 41.6 | 53.6 | 59.8 |
| HH-RLHF | 34.9 | 44.6 | 55.8 | 60.1 |
| Unnatural Instruct | 41.9 | 48.1 | 57.3 | 61.3 |
| OASST1 (Guanaco) | 36.6 | 46.4 | 57.0 | 62.2 |
| Alpaca | 38.8 | 47.8 | 57.3 | 62.5 |
| FLAN v2 | 44.5 | 51.4 | 59.2 | 63.9 |

## Risks and Biases

The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. The model was trained on various public datasets; it is possible that this model could generate lewd, biased, or otherwise offensive outputs. However, we note that finetuning on OASST1 seems to reduce biases as measured on the CrowS dataset. We report here the performance of Guanaco-65B compared to other baseline models on the CrowS dataset.

| | LLaMA-65B | GPT-3 | OPT-175B | Guanaco-65B |
|----------------------|-----------|-------|----------|-------------|
| Gender | 70.6 | 62.6 | 65.7 | **47.5** |
| Religion | 79.0 | 73.3 | 68.6 | **38.7** |
| Race/Color | 57.0 | 64.7 | 68.6 | **45.3** |
| Sexual orientation | 81.0 | 76.2 | 78.6 | **59.1** |
| Age | 70.1 | 64.4 | 67.8 | **36.3** |
| Nationality | 64.2 | 61.6 | 62.9 | **32.4** |
| Disability | 66.7 | 76.7 | 76.7 | **33.9** |
| Physical appearance | 77.8 | 74.6 | 76.2 | **43.1** |
| Socioeconomic status | 71.5 | 73.8 | 76.2 | **55.3** |
| Average | 66.6 | 67.2 | 69.5 | **43.5** |

## Citation

```bibtex
@article{dettmers2023qlora,
  title={QLoRA: Efficient Finetuning of Quantized LLMs},
  author={Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
  journal={arXiv preprint arXiv:2305.14314},
  year={2023}
}
```
localmodels/Guanaco-65B-GPTQ
localmodels
2023-07-13T05:21:10Z
7
4
transformers
[ "transformers", "llama", "text-generation", "arxiv:2305.14314", "arxiv:2302.13971", "arxiv:2304.07327", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-05-28T21:51:04Z
# Guanaco 65B GPTQ

From: https://huggingface.co/timdettmers/guanaco-65b

---

## Model

* guanaco-65b-4bit.safetensors
* Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
* Works with AutoGPTQ
* Parameters: Groupsize = None. act-order

---

# Guanaco Models Based on LLaMA

| [Paper](https://arxiv.org/abs/2305.14314) | [Code](https://github.com/artidoro/qlora) | [Demo](https://huggingface.co/spaces/uwnlp/guanaco-playground-tgi) |

**The Guanaco models are open-source finetuned chatbots obtained through 4-bit QLoRA tuning of LLaMA base models on the OASST1 dataset. They are available in 7B, 13B, 33B, and 65B parameter sizes.**

⚠️ Guanaco is a model purely intended for research purposes and could produce problematic outputs.

## Why use Guanaco?

- **Competitive with commercial chatbot systems on the Vicuna and OpenAssistant benchmarks** (ChatGPT and BARD) according to human and GPT-4 raters. We note that the relative performance on tasks not covered in these benchmarks could be very different. In addition, commercial systems evolve over time (we used outputs from the March 2023 version of the models).
- **Available open-source for research purposes**. Guanaco models allow *cheap* and *local* experimentation with high-quality chatbot systems.
- **Replicable and efficient training procedure** that can be extended to new use cases. Guanaco training scripts are available in the [QLoRA repo](https://github.com/artidoro/qlora).
- **Rigorous comparison to 16-bit methods** (both 16-bit full-finetuning and LoRA) in [our paper](https://arxiv.org/abs/2305.14314) demonstrates the effectiveness of 4-bit QLoRA finetuning.
- **Lightweight** checkpoints which only contain adapter weights.

## License and Intended Use

Guanaco adapter weights are available under the Apache 2 license. Note that use of the Guanaco adapter weights requires access to the LLaMA model weights. Guanaco is based on LLaMA and therefore should be used according to the LLaMA license.

## Usage

Here is an example of how you would load Guanaco 7B in 4-bits:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "huggyllama/llama-7b"
adapters_name = 'timdettmers/guanaco-7b'
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_4bit=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    max_memory={i: '24000MB' for i in range(torch.cuda.device_count())},
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
        bnb_4bit_use_double_quant=True,
        bnb_4bit_quant_type='nf4'
    ),
)
model = PeftModel.from_pretrained(model, adapters_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

Inference can then be performed as usual with HF models as follows:

```python
prompt = "Introduce yourself"
formatted_prompt = (
    f"A chat between a curious human and an artificial intelligence assistant."
    f"The assistant gives helpful, detailed, and polite answers to the user's questions.\n"
    f"### Human: {prompt} ### Assistant:"
)
inputs = tokenizer(formatted_prompt, return_tensors="pt").to("cuda:0")
outputs = model.generate(inputs=inputs.input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Expected output similar to the following:

```
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
### Human: Introduce yourself ### Assistant: I am an artificial intelligence assistant. I am here to help you with any questions you may have.
```

## Current Inference Limitations

Currently, 4-bit inference is slow. We recommend loading in 16 bits if inference speed is a concern. We are actively working on releasing efficient 4-bit inference kernels.

Below is how you would load the model in 16 bits:

```python
model_name = "huggyllama/llama-7b"
adapters_name = 'timdettmers/guanaco-7b'
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    max_memory={i: '24000MB' for i in range(torch.cuda.device_count())},
)
model = PeftModel.from_pretrained(model, adapters_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

## Model Card

**Architecture**: The Guanaco models are LoRA adapters to be used on top of LLaMA models. They are added to all layers. For all model sizes, we use $r=64$.

**Base Model**: Guanaco uses LLaMA as base model with sizes 7B, 13B, 33B, 65B. LLaMA is a causal language model pretrained on a large corpus of text. See [LLaMA paper](https://arxiv.org/abs/2302.13971) for more details. Note that Guanaco can inherit biases and limitations of the base model.

**Finetuning Data**: Guanaco is finetuned on OASST1. The exact dataset is available at [timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco).

**Languages**: The OASST1 dataset is multilingual (see [the paper](https://arxiv.org/abs/2304.07327) for details) and as such Guanaco responds to user queries in different languages. We note, however, that OASST1 is heavy in high-resource languages. In addition, human evaluation of Guanaco was only performed in English, and based on qualitative analysis we observed degradation in performance in other languages.

Next, we describe Training and Evaluation details.

### Training

Guanaco models are the result of 4-bit QLoRA supervised finetuning on the OASST1 dataset. All models use NormalFloat4 datatype for the base model and LoRA adapters on all linear layers with BFloat16 as computation datatype. We set LoRA $r=64$, $\alpha=16$. We also use Adam beta2 of 0.999, max grad norm of 0.3 and LoRA dropout of 0.1 for models up to 13B, and 0.05 for the 33B and 65B models. For the finetuning process, we use a constant learning rate schedule and the paged AdamW optimizer.

### Training hyperparameters

| Size | Dataset | Batch Size | Learning Rate | Max Steps | Sequence length |
|---|---|---|---|---|---|
| 7B | OASST1 | 16 | 2e-4 | 1875 | 512 |
| 13B | OASST1 | 16 | 2e-4 | 1875 | 512 |
| 33B | OASST1 | 16 | 1e-4 | 1875 | 512 |
| 65B | OASST1 | 16 | 1e-4 | 1875 | 512 |

### Evaluation

We test generative language capabilities through both automated and human evaluations. This second set of evaluations relies on queries curated by humans and aims at measuring the quality of model responses. We use the Vicuna and OpenAssistant datasets with 80 and 953 prompts respectively.

In both human and automated evaluations, for each prompt, raters compare all pairs of responses across the models considered. For human raters we randomize the order of the systems; for GPT-4 we evaluate with both orders.

| Benchmark | Vicuna | | Vicuna | | OpenAssistant | | |
|-----------|--------|------|--------|------|---------------|------|-----------------|
| Prompts | 80 | | 80 | | 953 | | |
| Judge | Human | | GPT-4 | | GPT-4 | | |
| Model | Elo | Rank | Elo | Rank | Elo | Rank | **Median Rank** |
| GPT-4 | 1176 | 1 | 1348 | 1 | 1294 | 1 | 1 |
| Guanaco-65B | 1023 | 2 | 1022 | 2 | 1008 | 3 | 2 |
| Guanaco-33B | 1009 | 4 | 992 | 3 | 1002 | 4 | 4 |
| ChatGPT-3.5 Turbo | 916 | 7 | 966 | 5 | 1015 | 2 | 5 |
| Vicuna-13B | 984 | 5 | 974 | 4 | 936 | 5 | 5 |
| Guanaco-13B | 975 | 6 | 913 | 6 | 885 | 6 | 6 |
| Guanaco-7B | 1010 | 3 | 879 | 8 | 860 | 7 | 7 |
| Bard | 909 | 8 | 902 | 7 | - | - | 8 |

We also use the MMLU benchmark to measure performance on a range of language understanding tasks. This is a multiple-choice benchmark covering 57 tasks including elementary mathematics, US history, computer science, law, and more. We report 5-shot test accuracy.

| Dataset | 7B | 13B | 33B | 65B |
|---|---|---|---|---|
| LLaMA no tuning | 35.1 | 46.9 | 57.8 | 63.4 |
| Self-Instruct | 36.4 | 33.3 | 53.0 | 56.7 |
| Longform | 32.1 | 43.2 | 56.6 | 59.7 |
| Chip2 | 34.5 | 41.6 | 53.6 | 59.8 |
| HH-RLHF | 34.9 | 44.6 | 55.8 | 60.1 |
| Unnatural Instruct | 41.9 | 48.1 | 57.3 | 61.3 |
| OASST1 (Guanaco) | 36.6 | 46.4 | 57.0 | 62.2 |
| Alpaca | 38.8 | 47.8 | 57.3 | 62.5 |
| FLAN v2 | 44.5 | 51.4 | 59.2 | 63.9 |

## Risks and Biases

The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. The model was trained on various public datasets; it is possible that this model could generate lewd, biased, or otherwise offensive outputs. However, we note that finetuning on OASST1 seems to reduce biases as measured on the CrowS dataset. We report here the performance of Guanaco-65B compared to other baseline models on the CrowS dataset.

| | LLaMA-65B | GPT-3 | OPT-175B | Guanaco-65B |
|----------------------|-----------|-------|----------|-------------|
| Gender | 70.6 | 62.6 | 65.7 | **47.5** |
| Religion | 79.0 | 73.3 | 68.6 | **38.7** |
| Race/Color | 57.0 | 64.7 | 68.6 | **45.3** |
| Sexual orientation | 81.0 | 76.2 | 78.6 | **59.1** |
| Age | 70.1 | 64.4 | 67.8 | **36.3** |
| Nationality | 64.2 | 61.6 | 62.9 | **32.4** |
| Disability | 66.7 | 76.7 | 76.7 | **33.9** |
| Physical appearance | 77.8 | 74.6 | 76.2 | **43.1** |
| Socioeconomic status | 71.5 | 73.8 | 76.2 | **55.3** |
| Average | 66.6 | 67.2 | 69.5 | **43.5** |

## Citation

```bibtex
@article{dettmers2023qlora,
  title={QLoRA: Efficient Finetuning of Quantized LLMs},
  author={Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
  journal={arXiv preprint arXiv:2305.14314},
  year={2023}
}
```
vertxlabs/controlnet_qrcode-control_v11p_v1
vertxlabs
2023-07-13T05:04:14Z
13
0
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "controlnet", "image-to-image", "en", "license:openrail++", "endpoints_compatible", "region:us" ]
image-to-image
2023-07-13T03:45:24Z
---
tags:
- stable-diffusion
- controlnet
- image-to-image
license: openrail++
language:
- en
pipeline_tag: image-to-image
---

# QR Code Conditioned ControlNet Models for Stable Diffusion 2.1

![1](https://www.dropbox.com/s/c1kx64v1cpsh2mp/1.png?raw=1)

## Model Description

This repo holds the safetensors & diffusers versions of the QR code conditioned ControlNet for Stable Diffusion v2.1. The Stable Diffusion 2.1 version is marginally more effective, as it was developed to address my specific needs. However, a 1.5 version model was also trained on the same dataset for those who are using the older version.

## How to use with diffusers

```bash
pip -q install diffusers transformers accelerate torch xformers
```

```python
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, DDIMScheduler
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "DionTimmer/controlnet_qrcode-control_v11p_sd21", torch_dtype=torch.float16
)

pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    controlnet=controlnet,
    safety_checker=None,
    torch_dtype=torch.float16,
)

pipe.enable_xformers_memory_efficient_attention()
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

def resize_for_condition_image(input_image: Image, resolution: int):
    input_image = input_image.convert("RGB")
    W, H = input_image.size
    k = float(resolution) / min(H, W)
    H *= k
    W *= k
    H = int(round(H / 64.0)) * 64
    W = int(round(W / 64.0)) * 64
    img = input_image.resize((W, H), resample=Image.LANCZOS)
    return img

# play with guidance_scale, controlnet_conditioning_scale and strength to make a valid QR Code Image

# qr code image
source_image = load_image("https://s3.amazonaws.com/moonup/production/uploads/6064e095abd8d3692e3e2ed6/A_RqHaAM6YHBodPLwqtjn.png")
# initial image, anything
init_image = load_image("https://s3.amazonaws.com/moonup/production/uploads/noauth/KfMBABpOwIuNolv1pe3qX.jpeg")
condition_image = resize_for_condition_image(source_image, 768)
init_image = resize_for_condition_image(init_image, 768)
generator = torch.manual_seed(123121231)

output = pipe(
    prompt="a bilboard in NYC with a qrcode",
    negative_prompt="ugly, disfigured, low quality, blurry, nsfw",
    image=init_image,
    control_image=condition_image,
    width=768,
    height=768,
    guidance_scale=20,
    controlnet_conditioning_scale=1.5,
    generator=generator,
    strength=0.9,
    num_inference_steps=150,
)
image = output.images[0]
```

## Performance and Limitations

These models perform quite well in most cases, but please note that they are not 100% accurate. In some instances, the QR code shape might not come through as expected. You can increase the ControlNet weight to emphasize the QR code shape. However, be cautious, as this might negatively impact the style of your output. **To optimize for scanning, please generate your QR codes with correction mode 'H' (30%).**

To balance between style and shape, a gentle fine-tuning of the control weight might be required based on the individual input and the desired output, as well as the correct prompt. Some prompts do not work until you increase the weight by a lot. The process of finding the right balance between these factors is part art and part science. For the best results, it is recommended to generate your artwork at a resolution of 768. This allows for a higher level of detail in the final product, enhancing the quality and effectiveness of the QR code-based artwork.
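Since the card recommends error-correction mode 'H', a minimal sketch of producing a suitable condition QR code with the Python qrcode package (the package choice and sizing values are assumptions, not from the card):

```python
import qrcode

# Error correction 'H' (~30%) leaves the most headroom for stylization,
# as recommended above.
qr = qrcode.QRCode(
    error_correction=qrcode.constants.ERROR_CORRECT_H,
    box_size=10,
    border=4,
)
qr.add_data("https://example.com")
qr.make(fit=True)
img = qr.make_image(fill_color="black", back_color="white")
img.save("qr_condition.png")  # use as the ControlNet condition image
```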
## Installation

The simplest way to use this is to place the .safetensors model and its .yaml config file in the folder where your other ControlNet models are installed, which varies per application. For usage in auto1111 they can be placed in the webui/models/ControlNet folder. They can be loaded using the ControlNet webui extension, which you can install through the extensions tab in the webui (https://github.com/Mikubill/sd-webui-controlnet). Make sure to enable your ControlNet unit and set your input image as the QR code. Set the model to either the SD2.1 or 1.5 version, depending on your base Stable Diffusion model, or it will error. No pre-processor is needed, though you can use the invert pre-processor for a different variation of results. 768 is the preferred resolution for generation since it allows for more detail. Make sure to look up additional info on how to use ControlNet if you get stuck; once you have the webui up and running, it's really easy to install the ControlNet extension as well.
at2507/finetuned_model
at2507
2023-07-13T04:56:59Z
103
4
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-10T08:51:09Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuned_model
  results: []
---

# finetuned_model

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on a [Financial News Tweet Dataset](https://huggingface.co/datasets/zeroshot/twitter-financial-news-sentiment). It achieves the following results on the evaluation set:
- Loss: 0.9382
- Accuracy: 0.803

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 0.6514 | 0.783 |
| No log | 2.0 | 250 | 0.6665 | 0.775 |
| No log | 3.0 | 375 | 0.9382 | 0.803 |

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
FelixChao/falcon-7b-instruct-ft-adapters-ESG-chatting
FelixChao
2023-07-13T04:55:48Z
3
0
peft
[ "peft", "region:us" ]
null
2023-07-13T04:55:35Z
---
library_name: peft
---

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16

### Framework versions

- PEFT 0.4.0.dev0
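A minimal sketch that reconstructs the 4-bit config above for loading the adapter (the base model tiiuae/falcon-7b-instruct is inferred from the repo name; trust_remote_code reflects Falcon-era checkpoints and is an assumption):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the bitsandbytes config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
# Assumed base model, inferred from the repo name.
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b-instruct",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "FelixChao/falcon-7b-instruct-ft-adapters-ESG-chatting")
```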
YanJiangJerry/SA-tweet-roberta-large-e4-w1-1.5-b16
YanJiangJerry
2023-07-13T04:53:22Z
105
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-13T04:17:05Z
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: SA-tweet-roberta-large-e4-w1-1.5-b16
  results: []
---

# SA-tweet-roberta-large-e4-w1-1.5-b16

This model is a fine-tuned version of [Amalq/autotrain-smm4h_large_roberta_clean-874027878](https://huggingface.co/Amalq/autotrain-smm4h_large_roberta_clean-874027878) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.6396
- Accuracy: 0.9166
- F1: 0.8872
- Precision: 0.8939
- Recall: 0.8806

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.2895 | 1.0 | 581 | 0.4026 | 0.9110 | 0.8806 | 0.8806 | 0.8806 |
| 0.1182 | 2.0 | 1162 | 0.6190 | 0.9110 | 0.8754 | 0.9153 | 0.8388 |
| 0.0589 | 3.0 | 1743 | 0.6167 | 0.9155 | 0.8838 | 0.9060 | 0.8627 |
| 0.0211 | 4.0 | 2324 | 0.6396 | 0.9166 | 0.8872 | 0.8939 | 0.8806 |

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
ui-chope/distilbert-base-uncased
ui-chope
2023-07-13T04:52:42Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-05T01:45:44Z
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased
  results: []
---

# distilbert-base-uncased

This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.1298
- Precision: 0.9739
- Recall: 0.9617
- F1: 0.9678
- Accuracy: 0.9837

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0218 | 1.0 | 5296 | 0.0828 | 0.9609 | 0.9609 | 0.9609 | 0.9842 |
| 0.0159 | 2.0 | 10592 | 0.1135 | 0.9677 | 0.9602 | 0.9639 | 0.9820 |
| 0.0137 | 3.0 | 15888 | 0.0846 | 0.9631 | 0.9570 | 0.9600 | 0.9831 |
| 0.0074 | 4.0 | 21184 | 0.1179 | 0.9621 | 0.9523 | 0.9572 | 0.9804 |
| 0.0058 | 5.0 | 26480 | 0.1080 | 0.9763 | 0.9664 | 0.9713 | 0.9857 |
| 0.0056 | 6.0 | 31776 | 0.1273 | 0.9685 | 0.9594 | 0.9639 | 0.9828 |
| 0.0055 | 7.0 | 37072 | 0.1451 | 0.9637 | 0.9531 | 0.9584 | 0.9800 |
| 0.0035 | 8.0 | 42368 | 0.1345 | 0.9707 | 0.9563 | 0.9634 | 0.9805 |
| 0.0027 | 9.0 | 47664 | 0.1242 | 0.9739 | 0.9633 | 0.9686 | 0.9852 |
| 0.0018 | 10.0 | 52960 | 0.1232 | 0.9739 | 0.9633 | 0.9686 | 0.9844 |
| 0.0017 | 11.0 | 58256 | 0.1298 | 0.9739 | 0.9617 | 0.9678 | 0.9837 |

### Framework versions

- Transformers 4.30.2
- Pytorch 1.13.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
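Given the token-classification pipeline tag, a minimal inference sketch (the entity label set is not documented in the card; the aggregation strategy is an assumption):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ui-chope/distilbert-base-uncased",
    aggregation_strategy="simple",  # merge word-piece tokens into entity spans
)
print(ner("Book a table for two at 7pm on Friday."))
```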
localmodels/Vicuna-7B-v1.3-GPTQ
localmodels
2023-07-13T04:47:45Z
15
0
transformers
[ "transformers", "llama", "text-generation", "arxiv:2302.13971", "arxiv:2306.05685", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-07-13T04:47:41Z
--- duplicated_from: localmodels/LLM --- # Vicuna 7B v1.3 GPTQ From LMSYS: https://huggingface.co/lmsys/vicuna-7b-v1.3 --- | Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description | | ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- | | vicuna-7b-v1.3-GPTQ-4bit-128g.no-act.order | 4 | 128 | False | 4.00 GB | True | GPTQ-for-LLaMa | Most compatible. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. | --- # Vicuna Model Card ## Model Details Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. - **Developed by:** [LMSYS](https://lmsys.org/) - **Model type:** An auto-regressive language model based on the transformer architecture. - **License:** Non-commercial license - **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971). ### Model Sources - **Repository:** https://github.com/lm-sys/FastChat - **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/ - **Paper:** https://arxiv.org/abs/2306.05685 - **Demo:** https://chat.lmsys.org/ ## Uses The primary use of Vicuna is research on large language models and chatbots. The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence. ## How to Get Started with the Model Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights. APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api. ## Training Details Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning. The training data is around 140K conversations collected from ShareGPT.com. See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf). ## Evaluation Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard). ## Difference between different versions of Vicuna See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
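## Example usage

The description above notes good inference speed in AutoGPTQ; a minimal loading sketch along those lines follows. The `model_basename` comes from the table, while the safetensors format and the chat template details are assumptions.

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "localmodels/Vicuna-7B-v1.3-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    model_basename="vicuna-7b-v1.3-GPTQ-4bit-128g.no-act.order",  # from the table above
    use_safetensors=True,  # assumption about the on-disk format
    device="cuda:0",
)

# Vicuna models follow a USER/ASSISTANT conversation format.
prompt = "USER: Why is the sky blue?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```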
localmodels/Vicuna-13B-v1.3-GPTQ
localmodels
2023-07-13T04:45:19Z
6
0
transformers
[ "transformers", "llama", "text-generation", "arxiv:2302.13971", "arxiv:2306.05685", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-07-13T04:45:15Z
--- duplicated_from: localmodels/LLM --- # Vicuna 13B v1.3 GPTQ From LMSYS: https://huggingface.co/lmsys/vicuna-13b-v1.3 --- | Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description | | ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- | | vicuna-13b-v1.3.0-GPTQ-4bit-128g.no-act.order | 4 | 128 | False | 7.45 GB | True | GPTQ-for-LLaMa | Most compatible. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. | --- # Vicuna Model Card ## Model Details Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. - **Developed by:** [LMSYS](https://lmsys.org/) - **Model type:** An auto-regressive language model based on the transformer architecture. - **License:** Non-commercial license - **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971). ### Model Sources - **Repository:** https://github.com/lm-sys/FastChat - **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/ - **Paper:** https://arxiv.org/abs/2306.05685 - **Demo:** https://chat.lmsys.org/ ## Uses The primary use of Vicuna is research on large language models and chatbots. The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence. ## How to Get Started with the Model Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights. APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api. ## Training Details Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning. The training data is around 140K conversations collected from ShareGPT.com. See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf). ## Evaluation Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard). ## Difference between different versions of Vicuna See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
insomeniaT/falcon-7b-uae-qapairs-67
insomeniaT
2023-07-13T04:40:37Z
10
1
peft
[ "peft", "text-generation", "en", "license:apache-2.0", "region:us" ]
text-generation
2023-07-07T19:21:06Z
--- license: apache-2.0 language: - en library_name: peft pipeline_tag: text-generation inference: false ---

# PEFT Model Fine-tuned on UAE QA Pairs

This repository contains a fine-tuned model based on the PEFT framework for question-answering tasks. The model has been trained on a dataset of question and answer pairs related to the UAE.

## Installation

Before using the model, make sure to install the necessary packages:

```sh
pip install transformers
pip install torch torchvision
pip install peft
```

## Usage

The model can be used for generating responses to prompts. Here is an example:

```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

peft_model_id = "insomeniaT/falcon-7b-uae-qapairs-67"
config = PeftConfig.from_pretrained(peft_model_id)

# Load the base model the adapter was trained on, then attach the PEFT adapter.
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, trust_remote_code=True)
model = PeftModel.from_pretrained(model, peft_model_id)
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
tokenizer.pad_token = tokenizer.eos_token

text = "### Human: What is the minimum requirement for the UAE's GCC residency? ### Assistant: "

device = "cuda:0"
inputs = tokenizer(text, return_tensors="pt")
inputs = inputs.to(device)
model.to(device)

outputs = model.generate(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], max_new_tokens=300, pad_token_id=tokenizer.eos_token_id)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
hoanghoavienvo/xlnet-large-cased-stage-2-ver1
hoanghoavienvo
2023-07-13T04:37:38Z
91
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlnet", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-13T03:34:49Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: xlnet-large-cased-stage-2-ver1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlnet-large-cased-stage-2-ver1 This model is a fine-tuned version of [xlnet-large-cased](https://huggingface.co/xlnet-large-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4128 - Accuracy: 0.8317 - F1: 0.9022 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 469 | 0.4226 | 0.85 | 0.9189 | | 0.4839 | 2.0 | 938 | 0.3964 | 0.845 | 0.9141 | | 0.4284 | 3.0 | 1407 | 0.4128 | 0.8317 | 0.9022 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
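## Example usage

A hedged inference sketch; the card does not document the label mapping, so the output labels are whatever was configured at training time.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="hoanghoavienvo/xlnet-large-cased-stage-2-ver1",
)

print(classifier("This movie was surprisingly good."))
```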
kazuhidet/norurun
kazuhidet
2023-07-13T04:23:39Z
0
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-13T04:06:49Z
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 instance_prompt: a photo of mascot norurun tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - dreambooth inference: true ---

# DreamBooth - kazuhidet/norurun

This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on "a photo of mascot norurun" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

DreamBooth for the text encoder was enabled: False.
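## Example usage

A minimal `diffusers` inference sketch, using the instance prompt from this card:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "kazuhidet/norurun", torch_dtype=torch.float16
).to("cuda")

# "a photo of mascot norurun" is the instance prompt the weights were trained on.
image = pipe("a photo of mascot norurun").images[0]
image.save("norurun.png")
```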
Ife/BM-FR
Ife
2023-07-13T04:17:55Z
106
0
transformers
[ "transformers", "pytorch", "marian", "text2text-generation", "bm", "fr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- language: - bm - fr ---

```bibtex
@inproceedings{adebara-abdul-mageed-2021-improving,
    title = "Improving Similar Language Translation With Transfer Learning",
    author = "Adebara, Ife and Abdul-Mageed, Muhammad",
    booktitle = "Proceedings of the Sixth Conference on Machine Translation",
    month = nov,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.wmt-1.27",
    pages = "273--278",
    abstract = "We investigate transfer learning based on pre-trained neural machine translation models to translate between (low-resource) similar languages. This work is part of our contribution to the WMT 2021 Similar Languages Translation Shared Task where we submitted models for different language pairs, including French-Bambara, Spanish-Catalan, and Spanish-Portuguese in both directions. Our models for Catalan-Spanish (82.79 BLEU) and Portuguese-Spanish (87.11 BLEU) rank top 1 in the official shared task evaluation, and we are the only team to submit models for the French-Bambara pairs.",
}
```
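## Example usage

Since this is a Marian-based translation model (per the repo tags), a minimal usage sketch might look like the following. The direction, Bambara (bm) to French (fr), is an assumption based on the model name `BM-FR`; the paper above covers both directions.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Ife/BM-FR")
model = AutoModelForSeq2SeqLM.from_pretrained("Ife/BM-FR")

# "I ni ce" is a common Bambara greeting, used here as sample input.
inputs = tokenizer("I ni ce", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```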
AnirbanRC/flan_t5_small_finetuned_anirbanrc
AnirbanRC
2023-07-13T04:12:54Z
162
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:samsum", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-13T04:03:45Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - samsum metrics: - rouge model-index: - name: flan_t5_small_finetuned_anirbanrc results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: samsum type: samsum config: samsum split: train[:50] args: samsum metrics: - name: Rouge1 type: rouge value: 43.2639 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan_t5_small_finetuned_anirbanrc This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the samsum dataset. It achieves the following results on the evaluation set: - Loss: 1.5172 - Rouge1: 43.2639 - Rouge2: 20.726 - Rougel: 37.0774 - Rougelsum: 39.6232 - Gen Len: 16.92 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 7 | 1.6379 | 42.0058 | 18.6227 | 35.3019 | 38.6413 | 17.36 | | No log | 2.0 | 14 | 1.5869 | 43.938 | 20.3595 | 36.876 | 40.0421 | 17.14 | | No log | 3.0 | 21 | 1.5483 | 43.3723 | 20.3935 | 36.9286 | 39.6476 | 17.0 | | No log | 4.0 | 28 | 1.5255 | 43.9774 | 21.5464 | 37.8954 | 40.5009 | 16.9 | | No log | 5.0 | 35 | 1.5172 | 43.2639 | 20.726 | 37.0774 | 39.6232 | 16.92 | ### Framework versions - Transformers 4.30.2 - Pytorch 1.13.1+cpu - Datasets 2.13.1 - Tokenizers 0.13.3
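## Example usage

A hedged inference sketch for dialogue summarization in the samsum style; the `summarize:` prefix is an assumption carried over from common T5 fine-tuning conventions.

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="AnirbanRC/flan_t5_small_finetuned_anirbanrc",
)

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)
print(summarizer("summarize: " + dialogue, max_length=40)[0]["summary_text"])
```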
abbiezz/tomuntitled
abbiezz
2023-07-13T04:12:40Z
0
0
null
[ "license:openrail", "region:us" ]
null
2023-07-13T04:06:35Z
--- license: openrail --- https://drive.google.com/file/d/1qilU9BEfX7RY8q9Uohesz9qQa0R_B5PW/view?usp=drive_link
quyc/picture
quyc
2023-07-13T03:51:01Z
54
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "image-to-text", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-to-text
2023-07-12T07:05:17Z
--- pipeline_tag: image-to-text ---
rdyzakya/IndoLEGO-ABSA
rdyzakya
2023-07-13T03:43:17Z
113
1
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "id", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-12T13:28:26Z
--- language: - id metrics: - f1 pipeline_tag: text2text-generation ---
kazuhidet/kasumi
kazuhidet
2023-07-13T03:35:32Z
1
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-13T03:18:42Z
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 instance_prompt: a photo of people kasumi tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - dreambooth inference: true ---

# DreamBooth - kazuhidet/kasumi

This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on "a photo of people kasumi" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

DreamBooth for the text encoder was enabled: False.
NasimB/gpt2-concat-all-base-rarity-all-iorder-est-5p5k
NasimB
2023-07-13T03:30:43Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-13T01:51:53Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-concat-all-base-rarity-all-iorder-est-5p5k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-concat-all-base-rarity-all-iorder-est-5p5k This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.3322 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.7625 | 0.31 | 500 | 5.6584 | | 5.4053 | 0.63 | 1000 | 5.2182 | | 5.0653 | 0.94 | 1500 | 4.9736 | | 4.7706 | 1.25 | 2000 | 4.8109 | | 4.6273 | 1.56 | 2500 | 4.6831 | | 4.5134 | 1.88 | 3000 | 4.5789 | | 4.3042 | 2.19 | 3500 | 4.5166 | | 4.2107 | 2.5 | 4000 | 4.4533 | | 4.1747 | 2.82 | 4500 | 4.3963 | | 4.0257 | 3.13 | 5000 | 4.3718 | | 3.8934 | 3.44 | 5500 | 4.3419 | | 3.8694 | 3.75 | 6000 | 4.3086 | | 3.7894 | 4.07 | 6500 | 4.2941 | | 3.5908 | 4.38 | 7000 | 4.2908 | | 3.586 | 4.69 | 7500 | 4.2727 | | 3.5713 | 5.01 | 8000 | 4.2605 | | 3.3959 | 5.32 | 8500 | 4.2717 | | 3.3922 | 5.63 | 9000 | 4.2700 | | 3.3874 | 5.94 | 9500 | 4.2690 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
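## Example usage

A minimal text-generation sketch for this checkpoint:

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="NasimB/gpt2-concat-all-base-rarity-all-iorder-est-5p5k",
)

print(generator("Once upon a time", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```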
LoupGarou/WizardCoder-Guanaco-15B-V1.1
LoupGarou
2023-07-13T03:21:55Z
1,506
12
transformers
[ "transformers", "pytorch", "gpt_bigcode", "text-generation", "en", "dataset:guanaco", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-12T06:10:19Z
--- language: - en datasets: - guanaco model_hub_library: - transformers license: - apache-2.0 ---

## WizardCoder-Guanaco-15B-V1.1 Model Card

The WizardCoder-Guanaco-15B-V1.1 is a language model that combines the strengths of the [WizardCoder](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0) base model and the [openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) dataset for finetuning. The openassistant-guanaco dataset was further trimmed to within 2 standard deviations of token size for input and output pairs, and all non-English data was removed to reduce training size requirements.

Version 1.1 showcases notable enhancements, employing a modified version of the previous openassistant-guanaco dataset. This dataset underwent a comprehensive revision, replacing every single answer with those generated by GPT-4.

The volume of the datasets has also been augmented by approximately 50%, with a particular focus on high school and abstract algebra. This expansion leveraged the combined capabilities of GPT-4 and GPT-3.5-Turbo. The initial evaluation of algebraic functions over 12 epochs indicated promising results from this enriched dataset. However, this is just the beginning; further refinements are in the pipeline, aiming to optimize the dataset quality and subsequently decrease the number of epochs required to achieve comparable results.

Considering the need to curtail memory consumption during training, this dataset was tailored to consist solely of English-language questions and answers. Consequently, the model's performance in language translation may not be up to par. Nevertheless, the focus remains on enhancing the model's proficiency and efficiency within its defined scope.

# Intended Use
This model is designed to be used for a wide array of text generation tasks that require understanding and generating English text. The model is expected to perform well in tasks such as answering questions, writing essays, summarizing text, translation, and more. However, given the specific data processing and finetuning done, it might be particularly effective for tasks related to English language question-answering systems.

# Limitations
Despite the powerful capabilities of this model, users should be aware of its limitations. The model's knowledge is up to date only until the time it was trained, and it doesn't know about events in the world after that. It can sometimes produce incorrect or nonsensical responses, as it doesn't understand the text in the same way humans do. It should be used as a tool to assist in generating text and not as a sole source of truth.
# How to use Here is an example of how to use this model: ```python from transformers import AutoModelForCausalLM, AutoTokenizer import time import torch class Chatbot: def __init__(self, model_name): self.tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side='left') self.model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True, torch_dtype=torch.bfloat16) if self.tokenizer.pad_token_id is None: self.tokenizer.pad_token_id = self.tokenizer.eos_token_id def get_response(self, prompt): inputs = self.tokenizer.encode_plus(prompt, return_tensors="pt", padding='max_length', max_length=100) if next(self.model.parameters()).is_cuda: inputs = {name: tensor.to('cuda') for name, tensor in inputs.items()} start_time = time.time() tokens = self.model.generate(input_ids=inputs['input_ids'], attention_mask=inputs['attention_mask'], pad_token_id=self.tokenizer.pad_token_id, max_new_tokens=400) end_time = time.time() output_tokens = tokens[0][inputs['input_ids'].shape[-1]:] output = self.tokenizer.decode(output_tokens, skip_special_tokens=True) time_taken = end_time - start_time return output, time_taken def main(): chatbot = Chatbot("LoupGarou/WizardCoder-Guanaco-15B-V1.1") while True: user_input = input("Enter your prompt: ") if user_input.lower() == 'quit': break output, time_taken = chatbot.get_response(user_input) print("\033[33m" + output + "\033[0m") print("Time taken to process: ", time_taken, "seconds") print("Exited the program.") if __name__ == "__main__": main() ``` # Training Procedure The WizardCoder model, serving as the base, was fine-tuned on a modified version of the openassistant-guanaco dataset. This dataset underwent a significant revision, replacing every single answer with responses generated by the AI model GPT-4. It was then expanded by approximately 50%, emphasizing high school and abstract algebra-related questions, using a mix of GPT-4 and GPT-3.5-Turbo for answer generation. The selected dataset was standardized to fall within two standard deviations of token size for the question sets, ensuring consistency in data handling. The order of the questions was also randomized to mitigate any potential biases during the training phase. In the interest of optimizing memory usage during the training process, the dataset was streamlined to only include English language content. As a result, all non-English data was systematically expunged from this fine-tuning dataset. It's worth noting that this modification limits the model's performance in language translation tasks, but it significantly boosts its efficiency and effectiveness when dealing with English language questions and answers. ## Acknowledgements This model, WizardCoder-Guanaco-15B-V1.1, is simply building on the efforts of two great teams to evaluate the performance of a combined model with the strengths of the [WizardCoder base model](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0) and the [openassistant-guanaco dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco). A sincere appreciation goes out to the developers and the community involved in the creation and refinement of these models. Their commitment to providing open source tools and datasets have been instrumental in making this project a reality. 
Moreover, a special note of thanks to the [Hugging Face](https://huggingface.co/) team, whose transformative library has not only streamlined the process of model creation and adaptation, but also democratized access to state-of-the-art machine learning technologies. Their impact on the development of this project cannot be overstated.
sd-dreambooth-library/this-youtuber-does-not-exist
sd-dreambooth-library
2023-07-13T03:12:53Z
32
2
diffusers
[ "diffusers", "tensorboard", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-02-03T21:50:06Z
--- license: creativeml-openrail-m tags: - text-to-image widget: - text: tyznedsk1 language: - en library_name: diffusers pipeline_tag: text-to-image ---

### This Youtuber Does Not Exist

Dreambooth model trained with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model.

You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!

WELCOME TO THE INTERNET:
# THIS YOUTUBER DOES NOT EXIST
# NOR DO YOU
# RED, PINK OR BLUE OR GREEN OR YELLOW M&M PLS

tyznedsk1 (use that in your prompt)
h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2
h2oai
2023-07-13T03:12:11Z
72
18
transformers
[ "transformers", "pytorch", "RefinedWeb", "text-generation", "gpt", "llm", "large language model", "h2o-llmstudio", "custom_code", "en", "dataset:OpenAssistant/oasst1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-06-23T07:35:02Z
--- language: - en library_name: transformers tags: - gpt - llm - large language model - h2o-llmstudio inference: false thumbnail: >- https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico license: apache-2.0 datasets: - OpenAssistant/oasst1 --- # Model Card ## Summary This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio). - Base model: [tiiuae/falcon-40b](https://huggingface.co/tiiuae/falcon-40b) - Dataset preparation: [OpenAssistant/oasst1](https://github.com/h2oai/h2o-llmstudio/blob/1935d84d9caafed3ee686ad2733eb02d2abfce57/app_utils/utils.py#LL1896C5-L1896C28) personalized ## Usage To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed. ```bash pip install transformers==4.29.2 pip install bitsandbytes==0.39.0 pip install accelerate==0.19.0 pip install torch==2.0.0 pip install einops==0.6.1 ``` ```python import torch from transformers import pipeline, BitsAndBytesConfig, AutoTokenizer model_kwargs = {} quantization_config = None # optional quantization quantization_config = BitsAndBytesConfig( load_in_8bit=True, llm_int8_threshold=6.0, ) model_kwargs["quantization_config"] = quantization_config tokenizer = AutoTokenizer.from_pretrained( "h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2", use_fast=False, padding_side="left", trust_remote_code=True, ) generate_text = pipeline( model="h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2", tokenizer=tokenizer, torch_dtype=torch.float16, trust_remote_code=True, use_fast=False, device_map={"": "cuda:0"}, model_kwargs=model_kwargs, ) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=1024, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You can print a sample prompt after the preprocessing step to see how it is feed to the tokenizer: ```python print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"]) ``` ```bash <|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|> ``` Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer: ```python import torch from h2oai_pipeline import H2OTextGenerationPipeline from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig quantization_config = None # optional quantization quantization_config = BitsAndBytesConfig( load_in_8bit=True, llm_int8_threshold=6.0, ) tokenizer = AutoTokenizer.from_pretrained( "h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2", use_fast=False, padding_side="left", trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( "h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2", trust_remote_code=True, torch_dtype=torch.float16, device_map={"": "cuda:0"}, quantization_config=quantization_config ).eval() generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=1024, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps: ```python from transformers import AutoModelForCausalLM, AutoTokenizer, 
BitsAndBytesConfig # Important: The prompt needs to be in the same format the model was trained with. # You can find an example prompt in the experiment logs. prompt = "<|prompt|>How are you?<|endoftext|><|answer|>" quantization_config = None # optional quantization quantization_config = BitsAndBytesConfig( load_in_8bit=True, llm_int8_threshold=6.0, ) tokenizer = AutoTokenizer.from_pretrained( "h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2", use_fast=False, padding_side="left", trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( "h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2", trust_remote_code=True, torch_dtype=torch.float16, device_map={"": "cuda:0"}, quantization_config=quantization_config ).eval() inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda") # generate configuration can be modified to your needs tokens = model.generate( **inputs, min_new_tokens=2, max_new_tokens=1024, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True )[0] tokens = tokens[inputs["input_ids"].shape[1]:] answer = tokenizer.decode(tokens, skip_special_tokens=True) print(answer) ``` ## Model Architecture ``` RWForCausalLM( (transformer): RWModel( (word_embeddings): Embedding(65024, 8192) (h): ModuleList( (0-59): 60 x DecoderLayer( (ln_attn): LayerNorm((8192,), eps=1e-05, elementwise_affine=True) (ln_mlp): LayerNorm((8192,), eps=1e-05, elementwise_affine=True) (self_attention): Attention( (maybe_rotary): RotaryEmbedding() (query_key_value): Linear(in_features=8192, out_features=9216, bias=False) (dense): Linear(in_features=8192, out_features=8192, bias=False) (attention_dropout): Dropout(p=0.0, inplace=False) ) (mlp): MLP( (dense_h_to_4h): Linear(in_features=8192, out_features=32768, bias=False) (act): GELU(approximate='none') (dense_4h_to_h): Linear(in_features=32768, out_features=8192, bias=False) ) ) ) (ln_f): LayerNorm((8192,), eps=1e-05, elementwise_affine=True) ) (lm_head): Linear(in_features=8192, out_features=65024, bias=False) ) ``` ## Model Configuration This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models. ## Disclaimer Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions. - Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints. - Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion. - Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. 
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities. - Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues. - Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes. By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
DeeeTeeee01/mytest_trainer_roberta
DeeeTeeee01
2023-07-13T03:05:52Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-13T02:27:31Z
--- tags: - generated_from_trainer model-index: - name: mytest_trainer_roberta results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mytest_trainer_roberta This model is a fine-tuned version of [cardiffnlp/twitter-xlm-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8617 - Rmse: 0.6928 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.7365 | 1.0 | 500 | 0.6992 | 0.7543 | | 0.6079 | 2.0 | 1000 | 0.6532 | 0.6841 | | 0.4798 | 3.0 | 1500 | 0.7034 | 0.6823 | | 0.3451 | 4.0 | 2000 | 0.7757 | 0.6925 | | 0.256 | 5.0 | 2500 | 1.0959 | 0.7266 | | 0.1818 | 6.0 | 3000 | 1.2213 | 0.6775 | | 0.1407 | 7.0 | 3500 | 1.4863 | 0.6764 | | 0.0938 | 8.0 | 4000 | 1.7213 | 0.7032 | | 0.0623 | 9.0 | 4500 | 1.8237 | 0.6917 | | 0.0484 | 10.0 | 5000 | 1.8617 | 0.6928 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
sd-dreambooth-library/HairDye
sd-dreambooth-library
2023-07-13T03:05:23Z
31
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-02-03T01:16:40Z
--- license: creativeml-openrail-m tags: - text-to-image widget: - text: daizky1 language: - en library_name: diffusers pipeline_tag: text-to-image ---

### Hair Dye

Dreambooth model trained with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model.

You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb).

### TERMS OF SERVICE:
# No Selling Models
# Merge with CREDIT

VAE is not required but is fun. I am not responsible for what you make. If this model bites you, call the CIA.

### Codeword: daizky1 (use that in your prompt)
Hedayat-Abrishami/rl_course_vizdoom_health_gathering_supreme
Hedayat-Abrishami
2023-07-13T02:54:34Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-13T01:42:33Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 12.83 +/- 5.92 name: mean_reward verified: false ---

An **APPO** model trained on the **doom_health_gathering_supreme** environment.

This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/

## Downloading the model

After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Hedayat-Abrishami/rl_course_vizdoom_health_gathering_supreme
```

## Using the model

To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```

You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details

## Training with this model

To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```

Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
yuean/my_resnet50_model
yuean
2023-07-13T02:41:43Z
249
0
transformers
[ "transformers", "pytorch", "resnet", "image-classification", "dataset:yuean/EuroSAT-2750", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-12T05:55:17Z
--- metrics: - accuracy pipeline_tag: image-classification datasets: - yuean/EuroSAT-2750 ---
Jonathaniu/alpaca-breast-cancer-7b-epoch-2
Jonathaniu
2023-07-13T02:19:50Z
4
0
peft
[ "peft", "region:us" ]
null
2023-07-13T02:19:33Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False ### Framework versions - PEFT 0.4.0.dev0
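## Example usage

The card does not state the base model, but with PEFT it can be resolved from the adapter config; a hedged loading sketch (the 8-bit flag mirrors the quantization config listed above):

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "Jonathaniu/alpaca-breast-cancer-7b-epoch-2"
config = PeftConfig.from_pretrained(adapter_id)

# The base model name is read from the adapter config, since the card omits it.
base = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    load_in_8bit=True,  # matches the bitsandbytes config used during training
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)
```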
hululuzhu/solidity-t5
hululuzhu
2023-07-13T00:59:53Z
118
10
transformers
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "solidity", "web3", "code generation", "smart contract", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-01-01T02:23:20Z
--- language: - en license: apache-2.0 tags: - solidity - web3 - code generation - smart contract widget: - text: "pragma solidity ^0.5.7;\n// Context: ParentA | Functions: helloA helloB | Constants: constantA \ncontract HelloWorld is ParentA {" --- # A code generation T5 model for solidity (web3 smart contract) - See https://github.com/hululuzhu/solidity-t5 for more context ## How to use this trained model - A hello world example to use this model, notice the input `text` includes - Header solidity version like `pragma solidity ^0.5.7` - Ancestor class/library info, e.g. public functions and constants from `ParentA` - Contract/Library/Interface declaration header, e.g. `HelloWorld` ended with `{` - Or simply use the test widget on the right side of the window and test, however the quality is known to be worse without decoding params ```python # !pip install transformers -q from transformers import AutoTokenizer, T5ForConditionalGeneration DEVICE = 'cuda' # fallback to cpu if you do not have cuda tokenizer = AutoTokenizer.from_pretrained("hululuzhu/solidity-t5") model = T5ForConditionalGeneration.from_pretrained("hululuzhu/solidity-t5").to(DEVICE) text = """pragma solidity ^0.5.7; // Context: ParentA | Functions: helloA helloB | Constants: constantA contract HelloWorld is ParentA {""" input_ids = tokenizer(text, return_tensors="pt", truncation=True).input_ids.to(DEVICE) # Need to tune beam/topk/topp params to get good outcome generated_ids = model.generate(input_ids, max_length=256, num_beams=5, top_p=0.95, top_k=50) print(tokenizer.decode(generated_ids[0], skip_special_tokens=True)) # Expect outcome """ string public constant name = "Hello World"; ... uint256 public constant override returns (uint256) { return initialSupply; } function initialSupply() public view returns (uint256) { ... 
""" ``` ## Background - Base T5 code model: https://huggingface.co/Salesforce/codet5-large - Source data: https://huggingface.co/datasets/mwritescode/slither-audited-smart-contracts - Processing steps: Clean, contract-level segmentation sepration, split in and out - After processing input sample ``` pragma solidity 0.5.7; // Context: PauserRole | Functions: isPauser addPauser renouncePauser | Constants: contract Pausable is PauserRole { ``` - After processing output sample (**notice indentation is bad, this is intentional to reduce token size**) ``` event Paused(address account); event Unpaused(address account); bool private _pausableActive; bool private _paused; constructor () internal { _paused = false; } function paused() public view returns (bool) { return _paused; } modifier whenNotPaused() { require(!_paused); _; } modifier whenPaused() { require(_paused); _; } function pause() public onlyPauser whenNotPaused whenPausableActive { _paused = true; emit Paused(msg.sender); } function unpause() public onlyPauser whenPaused whenPausableActive { _paused = false; emit Unpaused(msg.sender); } function _setPausableActive(bool _active) internal { _pausableActive = _active; } modifier whenPausableActive() { require(_pausableActive); _; } } ``` - Source training code: See the [end to end notebook](https://github.com/hululuzhu/solidity-t5/blob/main/code/Solidity_T5_Data_Processing_and_Training.ipynb) at code dir here ## Future TODO - The model is significantly under-trained because of lack of GPU budget, need 10x colab resources (~$100 for full train) - This is quite limited on how the model is used, potentially we could switch to GPT2 decoder-only to compare, but CodeT5 has its strong code optimization - Need more classifiers (T5 or BERT alike) to detect potential defects.
tbooy/Taxi-v3
tbooy
2023-07-13T00:58:52Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-13T00:58:41Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.73 name: mean_reward verified: false ---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
# load_from_hub is the helper defined in the Deep RL Course notebook this model comes from.
model = load_from_hub(repo_id="tbooy/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
tbooy/Taxi
tbooy
2023-07-13T00:58:17Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-13T00:57:59Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.73 name: mean_reward verified: false ---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
# load_from_hub is the helper defined in the Deep RL Course notebook this model comes from.
model = load_from_hub(repo_id="tbooy/Taxi", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
mpinedaa/distilbert_squad_sample_finetuned_model
mpinedaa
2023-07-13T00:30:09Z
103
0
transformers
[ "transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-12T14:14:48Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert_squad_sample_finetuned_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_squad_sample_finetuned_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.5925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 250 | 2.3529 | | 2.7741 | 2.0 | 500 | 1.6561 | | 2.7741 | 3.0 | 750 | 1.5925 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cpu - Datasets 2.13.1 - Tokenizers 0.13.3
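## Example usage

A minimal question-answering sketch:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="mpinedaa/distilbert_squad_sample_finetuned_model",
)

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The model was fine-tuned on the SQuAD dataset for three epochs.",
)
print(result["answer"], round(result["score"], 3))
```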
manmyung/dqn-SpaceInvadersNoFrameskip-v4
manmyung
2023-07-13T00:08:57Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-13T00:08:12Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 613.50 +/- 78.77 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga manmyung -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga manmyung -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga manmyung ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
Hedayat-Abrishami/ppo-CartPole-v1
Hedayat-Abrishami
2023-07-12T23:58:20Z
0
0
null
[ "tensorboard", "CartPole-v1", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-07-12T23:51:42Z
--- tags: - CartPole-v1 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 223.00 +/- 113.45 name: mean_reward verified: false --- # PPO Agent Playing CartPole-v1 This is a trained model of a PPO agent playing CartPole-v1. # Hyperparameters ```python {'exp_name': 'Name' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'CartPole-v1' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'Hedayat-Abrishami/ppo-CartPole-v1' 'batch_size': 512 'minibatch_size': 128} ```
ramymohamed/a2c-AntBulletEnv-v0
ramymohamed
2023-07-12T23:55:29Z
0
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-12T23:54:08Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1714.78 +/- 104.09 name: mean_reward verified: false ---

# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch; the checkpoint filename below is an assumption based on the usual `<algo>-<env>.zip` naming convention, so check the repo's file list if it differs:

```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Assumed filename: adjust to the actual file in the repo if needed.
checkpoint = load_from_hub(
    repo_id="ramymohamed/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",
)
model = A2C.load(checkpoint)
```
Blu72/Falcon
Blu72
2023-07-12T23:48:51Z
0
0
null
[ "license:openrail", "region:us" ]
null
2023-07-12T23:48:30Z
--- license: openrail ---

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-40b")
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-40b")
```
Isotonic/informal_to_formal
Isotonic
2023-07-12T22:55:28Z
111
0
transformers
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "style-transfer", "seq2seq", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-11-01T05:59:36Z
--- language: "en" tags: - style-transfer - text2text-generation - seq2seq inference: true ---

# Formality Style Transfer

## Model description

T5 model for formality style transfer, trained on the GYAFC dataset.

## How to use

A PyTorch model is available.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Isotonic/informal_to_formal")
# Move the model to GPU so it matches the device of the inputs below.
model = AutoModelForSeq2SeqLM.from_pretrained("Isotonic/informal_to_formal").to("cuda")

sentence = "will you look into these two deals and let me know"
text = "Make the following sentence Formal: " + sentence + " </s>"
encoding = tokenizer.encode_plus(text, padding=True, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to("cuda"), encoding["attention_mask"].to("cuda")
outputs = model.generate(
    input_ids=input_ids,
    attention_mask=attention_masks,
    max_length=256,
    do_sample=True,
    top_k=120,
    top_p=0.95,
    early_stopping=True,
    num_return_sequences=5,
)
for output in outputs:
    line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(line)

# Example output: "Would you look into the two deals in question, then let me know?"
```
Peebranco/teste-pedro-branco
Peebranco
2023-07-12T22:53:38Z
0
0
null
[ "pt", "en", "dataset:Open-Orca/OpenOrca", "region:us" ]
null
2023-07-12T22:52:52Z
--- datasets: - Open-Orca/OpenOrca language: - pt - en metrics: - character ---
digiplay/FumizukiMix_v1
digiplay
2023-07-12T22:49:15Z
329
3
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-12T22:33:07Z
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/107380/fumizukimix ![Screenshot_20230713_063348_Vivaldi Browser Snapshot.jpg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/yPC1NrFq_TXt3njI6DM4f.jpeg)
KingShmeeky/KingshmeekyRVC
KingShmeeky
2023-07-12T22:43:21Z
0
0
null
[ "music", "en", "license:openrail", "region:us" ]
null
2023-07-12T22:30:27Z
--- license: openrail language: - en tags: - music ---
lovelyxs/Pyramids
lovelyxs
2023-07-12T22:37:03Z
3
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-07-12T22:36:58Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids ---

# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: lovelyxs/Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
whywynn/q-Taxi-v3
whywynn
2023-07-12T22:34:05Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-12T21:52:10Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false ---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
# load_from_hub is the helper defined in the Deep RL Course notebook this model comes from.
model = load_from_hub(repo_id="whywynn/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
NasimB/gpt2-concat-cbt-mod-formatting-rarity-all-no-cut-rev
NasimB
2023-07-12T22:26:31Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-12T20:28:20Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-concat-cbt-mod-formatting-rarity-all-no-cut-rev results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-concat-cbt-mod-formatting-rarity-all-no-cut-rev This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.3397 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.68 | 0.29 | 500 | 5.6382 | | 5.326 | 0.59 | 1000 | 5.2100 | | 4.9873 | 0.88 | 1500 | 4.9727 | | 4.72 | 1.18 | 2000 | 4.8271 | | 4.5629 | 1.47 | 2500 | 4.7084 | | 4.468 | 1.76 | 3000 | 4.6087 | | 4.3349 | 2.06 | 3500 | 4.5351 | | 4.1534 | 2.35 | 4000 | 4.4838 | | 4.1205 | 2.65 | 4500 | 4.4211 | | 4.0865 | 2.94 | 5000 | 4.3663 | | 3.8691 | 3.24 | 5500 | 4.3627 | | 3.8207 | 3.53 | 6000 | 4.3272 | | 3.8 | 3.82 | 6500 | 4.2943 | | 3.6899 | 4.12 | 7000 | 4.2964 | | 3.5382 | 4.41 | 7500 | 4.2861 | | 3.5296 | 4.71 | 8000 | 4.2710 | | 3.5189 | 5.0 | 8500 | 4.2564 | | 3.3408 | 5.29 | 9000 | 4.2730 | | 3.3436 | 5.59 | 9500 | 4.2712 | | 3.3413 | 5.88 | 10000 | 4.2705 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
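The card does not yet include usage code; a minimal generation sketch with the standard `transformers` API (the prompt is illustrative):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "NasimB/gpt2-concat-cbt-mod-formatting-rarity-all-no-cut-rev"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```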
uribah/my_awesome_model_2
uribah
2023-07-12T21:59:11Z
62
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-12T21:27:51Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: uribah/my_awesome_model_2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # uribah/my_awesome_model_2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4528 - Validation Loss: 0.2263 - Train Accuracy: 0.9160 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1470, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.4528 | 0.2263 | 0.9160 | 0 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
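Since the repo is tagged with TensorFlow weights, a pipeline sketch that forces the TF backend (the input sentence is illustrative, and the label names depend on the unknown training data):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="uribah/my_awesome_model_2",
    framework="tf",  # the repo tags list TF weights
)
print(clf("This movie was surprisingly good."))
```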
lovelyxs/ppo-SnowballTarget
lovelyxs
2023-07-12T21:43:03Z
9
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-07-12T21:42:55Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
``` ### Watch your Agent play You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: lovelyxs/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
S1X3L4/Reinforce-cartpole0
S1X3L4
2023-07-12T21:36:17Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-12T21:36:00Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-cartpole0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
tiiuae/falcon-rw-1b
tiiuae
2023-07-12T21:34:11Z
25,115
104
transformers
[ "transformers", "pytorch", "falcon", "text-generation", "custom_code", "en", "dataset:tiiuae/falcon-refinedweb", "arxiv:2306.01116", "arxiv:2005.14165", "arxiv:2108.12409", "arxiv:2205.14135", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-04-26T09:25:36Z
--- datasets: - tiiuae/falcon-refinedweb language: - en inference: false license: apache-2.0 --- # Falcon-RW-1B **Falcon-RW-1B is a 1B-parameter causal decoder-only model built by [TII](https://www.tii.ae) and trained on 350B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb). It is made available under the Apache 2.0 license.** See the 📓 [paper on arXiv](https://arxiv.org/abs/2306.01116) for more details. RefinedWeb is a high-quality web dataset built by leveraging stringent filtering and large-scale deduplication. Falcon-RW-1B, trained on RefinedWeb only, matches or outperforms comparable models trained on curated data. ⚠️ Falcon is now available as a core model in the `transformers` library! To use the in-library version, please install the latest version of `transformers` with `pip install git+https://github.com/huggingface/transformers.git`, then simply remove the `trust_remote_code=True` argument from `from_pretrained()`. ⚠️ This model is intended for use as a **research artifact**, to study the influence of training on web data alone. **If you are interested in state-of-the-art models, we recommend using Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b), both trained on >1,000 billion tokens.**

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-rw-1b"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Girafatron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!** # Model Card for Falcon-RW-1B ## Model Details ### Model Description - **Developed by:** [https://www.tii.ae](https://www.tii.ae); - **Model type:** Causal decoder-only; - **Language(s) (NLP):** English; - **License:** Apache 2.0. ### Model Source - **Paper:** [https://arxiv.org/abs/2306.01116](https://arxiv.org/abs/2306.01116). ## Uses ### Direct Use Research on large language models, specifically the influence of adequately filtered and deduplicated web data on the properties of large language models (fairness, safety, limitations, capabilities, etc.). ### Out-of-Scope Use Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful. Broadly speaking, we would recommend Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) for any use not directly related to research on web data pipelines. ## Bias, Risks, and Limitations Falcon-RW-1B is trained on English data only, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online. ### Recommendations We recommend that users of Falcon-RW-1B finetune it for their specific tasks of interest, and that guardrails and appropriate precautions be taken for any production use. 
## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-rw-1b"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Girafatron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

## Training Details ### Training Data Falcon-RW-1B was trained on 350B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a high-quality filtered and deduplicated web dataset. The data was tokenized with the GPT-2 tokenizer. ### Training Procedure Falcon-RW-1B was trained on 32 A100 40GB GPUs, using only data parallelism with ZeRO. #### Training Hyperparameters Hyperparameters were adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)).

| **Hyperparameter** | **Value** | **Comment** |
|--------------------|------------|-------------------------------------------|
| Precision | `bfloat16` | |
| Optimizer | AdamW | |
| Learning rate | 2e-4 | 500M tokens warm-up, cosine decay to 2e-5 |
| Weight decay | 1e-1 | |
| Batch size | 512 | 4B tokens ramp-up |

#### Speeds, Sizes, Times Training happened in early December 2022 and took about six days. ## Evaluation See the 📓 [paper on arXiv](https://arxiv.org/abs/2306.01116) for in-depth evaluation. ## Technical Specifications ### Model Architecture and Objective Falcon-RW-1B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token). The architecture is adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), but uses ALiBi ([Press et al., 2021](https://arxiv.org/abs/2108.12409)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)).

| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 24 | |
| `d_model` | 2048 | |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 50304 | |
| Sequence length | 2048 | |

### Compute Infrastructure #### Hardware Falcon-RW-1B was trained on AWS SageMaker, on 32 A100 40GB GPUs in P4d instances. #### Software Falcon-RW-1B was trained with a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.). ## Citation ```
@article{refinedweb,
  title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
  author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
  journal={arXiv preprint arXiv:2306.01116},
  eprint={2306.01116},
  eprinttype = {arXiv},
  url={https://arxiv.org/abs/2306.01116},
  year={2023}
}
``` ## Contact [email protected]
yanex0/cn-v1-1
yanex0
2023-07-12T21:20:20Z
0
0
null
[ "license:openrail", "region:us" ]
null
2023-07-12T21:14:00Z
--- license: openrail --- These are the model files for [ControlNet 1.1](https://github.com/lllyasviel/ControlNet-v1-1-nightly). This model card will be filled in more detail after 1.1 is officially merged into ControlNet.
SrPrieto/ppo-LunarLander-v2
SrPrieto
2023-07-12T21:14:49Z
5
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-12T21:14:30Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 271.18 +/- 13.06 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
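The usage section above is left as a TODO; a minimal sketch of loading and evaluating the checkpoint (the `.zip` filename follows the usual course naming and is an assumption, so check the repo's files):

```python
import gymnasium as gym  # use `import gym` for stable-baselines3 < 2.0
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Assumed filename; verify against the repository contents.
checkpoint = load_from_hub(repo_id="SrPrieto/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```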
carbon225/byt5-abbreviations-pl
carbon225
2023-07-12T21:00:28Z
104
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "pl", "dataset:carbon225/poleval-abbreviation-disambiguation-wiki", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-09T21:40:24Z
--- datasets: - carbon225/poleval-abbreviation-disambiguation-wiki language: - pl widget: - text: "Kolejne 0,12 <mask>pkt. proc.</mask> wynika ze spadku popytu na polski eksport, a 0,08 z zaburzeń na rynku wewnętrznym" example_title: "Example 1" --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
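Until the template above is filled in, the widget example suggests a text2text abbreviation-disambiguation interface; a minimal sketch with the standard seq2seq API (the input sentence is taken from the widget, and the generation settings are assumptions):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "carbon225/byt5-abbreviations-pl"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Expand the abbreviation marked with <mask>...</mask>, as in the widget example.
text = "Kolejne 0,12 <mask>pkt. proc.</mask> wynika ze spadku popytu na polski eksport, a 0,08 z zaburzeń na rynku wewnętrznym"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```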
foreverip/q-Taxi-v3
foreverip
2023-07-12T20:56:40Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-12T20:56:37Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python
model = load_from_hub(repo_id="foreverip/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
saeedehj/led-base-finetune-xsum
saeedehj
2023-07-12T20:52:30Z
95
0
transformers
[ "transformers", "pytorch", "tensorboard", "led", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-12T16:21:51Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: led-base-16384-finetune-xsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # led-base-16384-finetune-xsum This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 3.3325 - Rouge1: 31.3157 - Rouge2: 9.2183 - Rougel: 23.7641 - Rougelsum: 23.8202 - Gen Len: 19.89 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 125 | 2.6311 | 32.5653 | 10.8601 | 25.3811 | 25.5187 | 19.84 | | No log | 2.0 | 250 | 2.7544 | 31.6321 | 9.9595 | 25.0264 | 25.0779 | 19.85 | | No log | 3.0 | 375 | 2.8261 | 32.0246 | 10.1415 | 25.2121 | 25.2632 | 19.89 | | 0.1515 | 4.0 | 500 | 2.9240 | 31.6961 | 11.1892 | 25.0684 | 25.1019 | 19.92 | | 0.1515 | 5.0 | 625 | 3.0229 | 31.1022 | 9.294 | 24.3075 | 24.309 | 19.9 | | 0.1515 | 6.0 | 750 | 3.0900 | 31.7063 | 10.2344 | 25.1885 | 25.3359 | 19.89 | | 0.1515 | 7.0 | 875 | 3.0958 | 31.6973 | 10.2856 | 25.5433 | 25.6242 | 19.91 | | 0.0437 | 8.0 | 1000 | 3.1248 | 30.9445 | 10.3904 | 24.0861 | 24.109 | 19.91 | | 0.0437 | 9.0 | 1125 | 3.1542 | 31.4694 | 9.4087 | 24.3248 | 24.4039 | 19.97 | | 0.0437 | 10.0 | 1250 | 3.1986 | 30.428 | 9.6657 | 24.2568 | 24.4035 | 19.86 | | 0.0437 | 11.0 | 1375 | 3.2040 | 32.3325 | 9.8754 | 25.117 | 25.1563 | 19.95 | | 0.0229 | 12.0 | 1500 | 3.2044 | 30.8435 | 8.6959 | 23.4129 | 23.5211 | 19.99 | | 0.0229 | 13.0 | 1625 | 3.2419 | 31.8807 | 9.6734 | 24.5748 | 24.6672 | 19.96 | | 0.0229 | 14.0 | 1750 | 3.2926 | 31.8181 | 9.5238 | 24.3606 | 24.4569 | 19.88 | | 0.0229 | 15.0 | 1875 | 3.2935 | 30.7551 | 8.9042 | 23.9581 | 24.1074 | 19.98 | | 0.0107 | 16.0 | 2000 | 3.3219 | 31.3919 | 9.3308 | 24.1432 | 24.2162 | 19.93 | | 0.0107 | 17.0 | 2125 | 3.3167 | 31.7918 | 9.4813 | 23.9672 | 24.0244 | 19.9 | | 0.0107 | 18.0 | 2250 | 3.3281 | 31.0624 | 9.3608 | 23.6247 | 23.6658 | 19.89 | | 0.0107 | 19.0 | 2375 | 3.3248 | 31.7832 | 9.5257 | 23.9738 | 24.0255 | 19.96 | | 0.0063 | 20.0 | 2500 | 3.3325 | 31.3157 | 9.2183 | 23.7641 | 23.8202 | 19.89 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
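A quick inference sketch with the summarization pipeline (the input text is illustrative; LED supports much longer inputs than shown):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="saeedehj/led-base-finetune-xsum")
article = (
    "The council confirmed on Tuesday that the bridge will close for six months "
    "of repairs, with diversions in place for buses and cyclists."
)
print(summarizer(article, max_length=20)[0]["summary_text"])
```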
foreverip/q-FrozenLake-v1-4x4-noSlippery
foreverip
2023-07-12T20:49:59Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-12T20:49:56Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python
model = load_from_hub(repo_id="foreverip/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
kimnguyenwork/Taxi-v3
kimnguyenwork
2023-07-12T20:45:36Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-12T20:45:29Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.73 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python
model = load_from_hub(repo_id="kimnguyenwork/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
Sanyam0605/whisper-large-v2-hi
Sanyam0605
2023-07-12T20:39:50Z
5
1
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "hi", "dataset:google/fleurs", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-10T20:07:11Z
--- datasets: - google/fleurs metrics: - accuracy/wer license: apache-2.0 language: - hi library_name: transformers ---
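The card is metadata-only so far; a minimal transcription sketch with the ASR pipeline (the audio path is illustrative):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Sanyam0605/whisper-large-v2-hi")
print(asr("sample_hindi_clip.wav")["text"])
```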
NasimB/gpt2-concat-guten-rarity-no-cut
NasimB
2023-07-12T20:33:38Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-12T18:48:47Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-concat-guten-rarity-no-cut results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-concat-guten-rarity-no-cut This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.3296 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.6869 | 0.29 | 500 | 5.6385 | | 5.3235 | 0.59 | 1000 | 5.2015 | | 4.9865 | 0.88 | 1500 | 4.9498 | | 4.7068 | 1.18 | 2000 | 4.8080 | | 4.5674 | 1.47 | 2500 | 4.6941 | | 4.4601 | 1.76 | 3000 | 4.5872 | | 4.3293 | 2.06 | 3500 | 4.5155 | | 4.1497 | 2.35 | 4000 | 4.4676 | | 4.1182 | 2.64 | 4500 | 4.4072 | | 4.0826 | 2.94 | 5000 | 4.3514 | | 3.8664 | 3.23 | 5500 | 4.3488 | | 3.8272 | 3.53 | 6000 | 4.3168 | | 3.8034 | 3.82 | 6500 | 4.2843 | | 3.6795 | 4.11 | 7000 | 4.2836 | | 3.5333 | 4.41 | 7500 | 4.2764 | | 3.534 | 4.7 | 8000 | 4.2603 | | 3.5182 | 4.99 | 8500 | 4.2478 | | 3.3437 | 5.29 | 9000 | 4.2620 | | 3.3384 | 5.58 | 9500 | 4.2601 | | 3.3385 | 5.88 | 10000 | 4.2595 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
Jonathaniu/alpaca-breast-cancer-7b
Jonathaniu
2023-07-12T20:29:18Z
2
0
peft
[ "peft", "region:us" ]
null
2023-07-11T18:18:27Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False ### Framework versions - PEFT 0.4.0.dev0
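The card lists only the quantization config; a minimal sketch of loading the adapter on top of its base model (the base model id is read from the adapter config, and the 8-bit setting mirrors the training config above):

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "Jonathaniu/alpaca-breast-cancer-7b"
config = PeftConfig.from_pretrained(adapter_id)

# Load the base model in 8-bit, matching the bitsandbytes config above.
base = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path, load_in_8bit=True, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
```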
odunola/sentence-transformers-bible-reference-final
odunola
2023-07-12T20:20:22Z
14
4
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "transformers", "dataset:odunola/bible-reference-sentence-pair", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
feature-extraction
2023-06-18T23:33:09Z
--- pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - transformers license: apache-2.0 datasets: - odunola/bible-reference-sentence-pair --- # odunola/sentence-transformers-bible-reference-final This model is a member of the [Sentence-Transformers](https://www.SBERT.net) family: it maps sentences and paragraphs to a 768-dimensional dense vector space. It was fine-tuned on over one hundred thousand sentence pairs rooted in biblical context, so it can recognise when two sentences address the same biblical passage or theme, even when they are phrased very differently. The dataset of matched sentence pairs is available on the Hub as [odunola/bible-reference-sentence-pair](https://huggingface.co/datasets/odunola/bible-reference-sentence-pair). <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ```
pip install -U sentence-transformers
``` Then you can use the model like this: ```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('odunola/sentence-transformers-bible-reference-final')
embeddings = model.encode(sentences)
print(embeddings)
``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings. ```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('odunola/sentence-transformers-bible-reference-final')
model = AutoModel.from_pretrained('odunola/sentence-transformers-bible-reference-final')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=odunola/sentence-transformers-bible-reference-final) ## Training The model was trained with the parameters: **DataLoader**: `sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 10940 with parameters: ```
{'batch_size': 32}
``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
``` Parameters of the fit()-Method: ```
{
    "epochs": 5,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 5470,
    "weight_decay": 0.01
}
``` ## Full Model Architecture ```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
``` ## Citing & Authors <!--- Describe where people can find more information -->
VK246/IC_ver5b_coco_swin_gpt2_01pc_1e
VK246
2023-07-12T20:14:54Z
46
0
transformers
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:coco", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-07-12T19:47:07Z
--- tags: - generated_from_trainer datasets: - coco metrics: - rouge - bleu model-index: - name: IC_ver5b_coco_swin_gpt2_01pc_1e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # IC_ver5b_coco_swin_gpt2_01pc_1e This model is a fine-tuned version of [VK246/IC_ver5a_coco_swin_gpt2_05pc_1e](https://huggingface.co/VK246/IC_ver5a_coco_swin_gpt2_05pc_1e) on the coco dataset. It achieves the following results on the evaluation set: - Loss: 1.1266 - Rouge1: 27.4772 - Rouge2: 5.9305 - Rougel: 25.1138 - Rougelsum: 25.1235 - Bleu: 2.437 - Gen Len: 11.1124 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 96 - eval_batch_size: 96 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:------:|:-------:| | 1.2093 | 0.42 | 25 | 1.1552 | 22.8898 | 3.6353 | 20.6781 | 20.6737 | 1.1554 | 11.1124 | | 1.2149 | 0.85 | 50 | 1.1358 | 26.2857 | 5.2765 | 24.0266 | 24.0308 | 2.1954 | 11.1124 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
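A quick captioning sketch with the image-to-text pipeline (the image path is illustrative):

```python
from transformers import pipeline

captioner = pipeline("image-to-text", model="VK246/IC_ver5b_coco_swin_gpt2_01pc_1e")
print(captioner("path/to/photo.jpg")[0]["generated_text"])
```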
jd06/TwoSentenceHorrorModel
jd06
2023-07-12T20:14:37Z
211
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-11T20:51:49Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: TwoSentenceHorrorModel results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # TwoSentenceHorrorModel This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.3563 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 1 | 4.7786 | | No log | 2.0 | 2 | 4.4930 | | No log | 3.0 | 3 | 4.3563 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
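A minimal generation sketch via the text-generation pipeline (the prompt is illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="jd06/TwoSentenceHorrorModel")
prompt = "I tucked my daughter in and turned off the light."
print(generator(prompt, max_new_tokens=30)[0]["generated_text"])
```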
grace-pro/xlmr-finetuned-igbo
grace-pro
2023-07-12T20:02:08Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-12T18:22:59Z
--- license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: xlmr-finetuned-igbo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlmr-finetuned-igbo This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2323 - Precision: 0.7134 - Recall: 0.4641 - F1: 0.5623 - Accuracy: 0.9188 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.284 | 1.0 | 1257 | 0.2690 | 0.7177 | 0.2740 | 0.3966 | 0.9019 | | 0.2383 | 2.0 | 2514 | 0.2597 | 0.7436 | 0.3418 | 0.4683 | 0.9101 | | 0.2108 | 3.0 | 3771 | 0.2241 | 0.7097 | 0.4378 | 0.5416 | 0.9161 | | 0.1925 | 4.0 | 5028 | 0.2323 | 0.7274 | 0.4343 | 0.5439 | 0.9173 | | 0.1774 | 5.0 | 6285 | 0.2323 | 0.7134 | 0.4641 | 0.5623 | 0.9188 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
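A quick tagging sketch with the token-classification pipeline (the example sentence is illustrative, and the label set depends on the unnamed training data):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="grace-pro/xlmr-finetuned-igbo",
    aggregation_strategy="simple",
)
print(ner("Chinua Achebe dere akwụkwọ na Nsukka."))
```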
macapa/segmentation-mod
macapa
2023-07-12T19:59:53Z
0
0
fastai
[ "fastai", "region:us" ]
null
2023-07-12T19:59:38Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
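Until the card is filled in, a minimal loading sketch with the Hub's fastai integration (what the learner predicts depends on the unpublished training setup):

```python
from huggingface_hub import from_pretrained_fastai

learner = from_pretrained_fastai("macapa/segmentation-mod")
# prediction = learner.predict(some_image)  # input type depends on the training pipeline
```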
newsrx/instructor-large
newsrx
2023-07-12T19:56:14Z
7
0
sentence-transformers
[ "sentence-transformers", "pytorch", "t5", "text-embedding", "embeddings", "information-retrieval", "beir", "text-classification", "language-model", "text-clustering", "text-semantic-similarity", "text-evaluation", "prompt-retrieval", "text-reranking", "feature-extraction", "sentence-similarity", "transformers", "English", "Sentence Similarity", "natural_questions", "ms_marco", "fever", "hotpot_qa", "mteb", "en", "arxiv:2212.09741", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "region:us" ]
sentence-similarity
2023-07-12T19:56:14Z
--- pipeline_tag: sentence-similarity tags: - text-embedding - embeddings - information-retrieval - beir - text-classification - language-model - text-clustering - text-semantic-similarity - text-evaluation - prompt-retrieval - text-reranking - sentence-transformers - feature-extraction - sentence-similarity - transformers - t5 - English - Sentence Similarity - natural_questions - ms_marco - fever - hotpot_qa - mteb language: en inference: false license: apache-2.0 model-index: - name: INSTRUCTOR results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 88.13432835820896 - type: ap value: 59.298209334395665 - type: f1 value: 83.31769058643586 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 91.526375 - type: ap value: 88.16327709705504 - type: f1 value: 91.51095801287843 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 47.856 - type: f1 value: 45.41490917650942 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 31.223 - type: map_at_10 value: 47.947 - type: map_at_100 value: 48.742000000000004 - type: map_at_1000 value: 48.745 - type: map_at_3 value: 43.137 - type: map_at_5 value: 45.992 - type: mrr_at_1 value: 32.432 - type: mrr_at_10 value: 48.4 - type: mrr_at_100 value: 49.202 - type: mrr_at_1000 value: 49.205 - type: mrr_at_3 value: 43.551 - type: mrr_at_5 value: 46.467999999999996 - type: ndcg_at_1 value: 31.223 - type: ndcg_at_10 value: 57.045 - type: ndcg_at_100 value: 60.175 - type: ndcg_at_1000 value: 60.233000000000004 - type: ndcg_at_3 value: 47.171 - type: ndcg_at_5 value: 52.322 - type: precision_at_1 value: 31.223 - type: precision_at_10 value: 8.599 - type: precision_at_100 value: 0.991 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 19.63 - type: precision_at_5 value: 14.282 - type: recall_at_1 value: 31.223 - type: recall_at_10 value: 85.989 - type: recall_at_100 value: 99.075 - type: recall_at_1000 value: 99.502 - type: recall_at_3 value: 58.89 - type: recall_at_5 value: 71.408 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 43.1621946393635 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 32.56417132407894 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 64.29539304390207 - type: mrr value: 76.44484017060196 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_spearman value: 84.38746499431112 - task: type: Classification dataset: type: mteb/banking77 name: 
MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 78.51298701298701 - type: f1 value: 77.49041754069235 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 37.61848554098577 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 31.32623280148178 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 35.803000000000004 - type: map_at_10 value: 48.848 - type: map_at_100 value: 50.5 - type: map_at_1000 value: 50.602999999999994 - type: map_at_3 value: 45.111000000000004 - type: map_at_5 value: 47.202 - type: mrr_at_1 value: 44.635000000000005 - type: mrr_at_10 value: 55.593 - type: mrr_at_100 value: 56.169999999999995 - type: mrr_at_1000 value: 56.19499999999999 - type: mrr_at_3 value: 53.361999999999995 - type: mrr_at_5 value: 54.806999999999995 - type: ndcg_at_1 value: 44.635000000000005 - type: ndcg_at_10 value: 55.899 - type: ndcg_at_100 value: 60.958 - type: ndcg_at_1000 value: 62.302 - type: ndcg_at_3 value: 51.051 - type: ndcg_at_5 value: 53.351000000000006 - type: precision_at_1 value: 44.635000000000005 - type: precision_at_10 value: 10.786999999999999 - type: precision_at_100 value: 1.6580000000000001 - type: precision_at_1000 value: 0.213 - type: precision_at_3 value: 24.893 - type: precision_at_5 value: 17.740000000000002 - type: recall_at_1 value: 35.803000000000004 - type: recall_at_10 value: 68.657 - type: recall_at_100 value: 89.77199999999999 - type: recall_at_1000 value: 97.67 - type: recall_at_3 value: 54.066 - type: recall_at_5 value: 60.788 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 33.706 - type: map_at_10 value: 44.896 - type: map_at_100 value: 46.299 - type: map_at_1000 value: 46.44 - type: map_at_3 value: 41.721000000000004 - type: map_at_5 value: 43.486000000000004 - type: mrr_at_1 value: 41.592 - type: mrr_at_10 value: 50.529 - type: mrr_at_100 value: 51.22 - type: mrr_at_1000 value: 51.258 - type: mrr_at_3 value: 48.205999999999996 - type: mrr_at_5 value: 49.528 - type: ndcg_at_1 value: 41.592 - type: ndcg_at_10 value: 50.77199999999999 - type: ndcg_at_100 value: 55.383 - type: ndcg_at_1000 value: 57.288 - type: ndcg_at_3 value: 46.324 - type: ndcg_at_5 value: 48.346000000000004 - type: precision_at_1 value: 41.592 - type: precision_at_10 value: 9.516 - type: precision_at_100 value: 1.541 - type: precision_at_1000 value: 0.2 - type: precision_at_3 value: 22.399 - type: precision_at_5 value: 15.770999999999999 - type: recall_at_1 value: 33.706 - type: recall_at_10 value: 61.353 - type: recall_at_100 value: 80.182 - type: recall_at_1000 value: 91.896 - type: recall_at_3 value: 48.204 - type: recall_at_5 value: 53.89699999999999 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 44.424 - type: map_at_10 value: 57.169000000000004 - type: map_at_100 value: 58.202 - type: map_at_1000 value: 58.242000000000004 - 
type: map_at_3 value: 53.825 - type: map_at_5 value: 55.714 - type: mrr_at_1 value: 50.470000000000006 - type: mrr_at_10 value: 60.489000000000004 - type: mrr_at_100 value: 61.096 - type: mrr_at_1000 value: 61.112 - type: mrr_at_3 value: 58.192 - type: mrr_at_5 value: 59.611999999999995 - type: ndcg_at_1 value: 50.470000000000006 - type: ndcg_at_10 value: 63.071999999999996 - type: ndcg_at_100 value: 66.964 - type: ndcg_at_1000 value: 67.659 - type: ndcg_at_3 value: 57.74399999999999 - type: ndcg_at_5 value: 60.367000000000004 - type: precision_at_1 value: 50.470000000000006 - type: precision_at_10 value: 10.019 - type: precision_at_100 value: 1.29 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 25.558999999999997 - type: precision_at_5 value: 17.467 - type: recall_at_1 value: 44.424 - type: recall_at_10 value: 77.02 - type: recall_at_100 value: 93.738 - type: recall_at_1000 value: 98.451 - type: recall_at_3 value: 62.888 - type: recall_at_5 value: 69.138 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.294 - type: map_at_10 value: 34.503 - type: map_at_100 value: 35.641 - type: map_at_1000 value: 35.724000000000004 - type: map_at_3 value: 31.753999999999998 - type: map_at_5 value: 33.190999999999995 - type: mrr_at_1 value: 28.362 - type: mrr_at_10 value: 36.53 - type: mrr_at_100 value: 37.541000000000004 - type: mrr_at_1000 value: 37.602000000000004 - type: mrr_at_3 value: 33.917 - type: mrr_at_5 value: 35.358000000000004 - type: ndcg_at_1 value: 28.362 - type: ndcg_at_10 value: 39.513999999999996 - type: ndcg_at_100 value: 44.815 - type: ndcg_at_1000 value: 46.839 - type: ndcg_at_3 value: 34.02 - type: ndcg_at_5 value: 36.522 - type: precision_at_1 value: 28.362 - type: precision_at_10 value: 6.101999999999999 - type: precision_at_100 value: 0.9129999999999999 - type: precision_at_1000 value: 0.11399999999999999 - type: precision_at_3 value: 14.161999999999999 - type: precision_at_5 value: 9.966 - type: recall_at_1 value: 26.294 - type: recall_at_10 value: 53.098 - type: recall_at_100 value: 76.877 - type: recall_at_1000 value: 91.834 - type: recall_at_3 value: 38.266 - type: recall_at_5 value: 44.287 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 16.407 - type: map_at_10 value: 25.185999999999996 - type: map_at_100 value: 26.533 - type: map_at_1000 value: 26.657999999999998 - type: map_at_3 value: 22.201999999999998 - type: map_at_5 value: 23.923 - type: mrr_at_1 value: 20.522000000000002 - type: mrr_at_10 value: 29.522 - type: mrr_at_100 value: 30.644 - type: mrr_at_1000 value: 30.713 - type: mrr_at_3 value: 26.679000000000002 - type: mrr_at_5 value: 28.483000000000004 - type: ndcg_at_1 value: 20.522000000000002 - type: ndcg_at_10 value: 30.656 - type: ndcg_at_100 value: 36.864999999999995 - type: ndcg_at_1000 value: 39.675 - type: ndcg_at_3 value: 25.319000000000003 - type: ndcg_at_5 value: 27.992 - type: precision_at_1 value: 20.522000000000002 - type: precision_at_10 value: 5.795999999999999 - type: precision_at_100 value: 1.027 - type: precision_at_1000 value: 0.13999999999999999 - type: precision_at_3 value: 12.396 - type: precision_at_5 value: 9.328 - type: recall_at_1 value: 16.407 - type: recall_at_10 value: 43.164 - type: recall_at_100 value: 69.695 - type: recall_at_1000 value: 89.41900000000001 - 
type: recall_at_3 value: 28.634999999999998 - type: recall_at_5 value: 35.308 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 30.473 - type: map_at_10 value: 41.676 - type: map_at_100 value: 43.120999999999995 - type: map_at_1000 value: 43.230000000000004 - type: map_at_3 value: 38.306000000000004 - type: map_at_5 value: 40.355999999999995 - type: mrr_at_1 value: 37.536 - type: mrr_at_10 value: 47.643 - type: mrr_at_100 value: 48.508 - type: mrr_at_1000 value: 48.551 - type: mrr_at_3 value: 45.348 - type: mrr_at_5 value: 46.744 - type: ndcg_at_1 value: 37.536 - type: ndcg_at_10 value: 47.823 - type: ndcg_at_100 value: 53.395 - type: ndcg_at_1000 value: 55.271 - type: ndcg_at_3 value: 42.768 - type: ndcg_at_5 value: 45.373000000000005 - type: precision_at_1 value: 37.536 - type: precision_at_10 value: 8.681 - type: precision_at_100 value: 1.34 - type: precision_at_1000 value: 0.165 - type: precision_at_3 value: 20.468 - type: precision_at_5 value: 14.495 - type: recall_at_1 value: 30.473 - type: recall_at_10 value: 60.092999999999996 - type: recall_at_100 value: 82.733 - type: recall_at_1000 value: 94.875 - type: recall_at_3 value: 45.734 - type: recall_at_5 value: 52.691 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 29.976000000000003 - type: map_at_10 value: 41.097 - type: map_at_100 value: 42.547000000000004 - type: map_at_1000 value: 42.659000000000006 - type: map_at_3 value: 37.251 - type: map_at_5 value: 39.493 - type: mrr_at_1 value: 37.557 - type: mrr_at_10 value: 46.605000000000004 - type: mrr_at_100 value: 47.487 - type: mrr_at_1000 value: 47.54 - type: mrr_at_3 value: 43.721 - type: mrr_at_5 value: 45.411 - type: ndcg_at_1 value: 37.557 - type: ndcg_at_10 value: 47.449000000000005 - type: ndcg_at_100 value: 53.052 - type: ndcg_at_1000 value: 55.010999999999996 - type: ndcg_at_3 value: 41.439 - type: ndcg_at_5 value: 44.292 - type: precision_at_1 value: 37.557 - type: precision_at_10 value: 8.847 - type: precision_at_100 value: 1.357 - type: precision_at_1000 value: 0.16999999999999998 - type: precision_at_3 value: 20.091 - type: precision_at_5 value: 14.384 - type: recall_at_1 value: 29.976000000000003 - type: recall_at_10 value: 60.99099999999999 - type: recall_at_100 value: 84.245 - type: recall_at_1000 value: 96.97200000000001 - type: recall_at_3 value: 43.794 - type: recall_at_5 value: 51.778999999999996 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 28.099166666666665 - type: map_at_10 value: 38.1365 - type: map_at_100 value: 39.44491666666667 - type: map_at_1000 value: 39.55858333333334 - type: map_at_3 value: 35.03641666666666 - type: map_at_5 value: 36.79833333333334 - type: mrr_at_1 value: 33.39966666666667 - type: mrr_at_10 value: 42.42583333333333 - type: mrr_at_100 value: 43.28575 - type: mrr_at_1000 value: 43.33741666666667 - type: mrr_at_3 value: 39.94975 - type: mrr_at_5 value: 41.41633333333334 - type: ndcg_at_1 value: 33.39966666666667 - type: ndcg_at_10 value: 43.81741666666667 - type: ndcg_at_100 value: 49.08166666666667 - type: ndcg_at_1000 value: 51.121166666666674 - type: ndcg_at_3 value: 38.73575 - type: ndcg_at_5 value: 41.18158333333333 - type: precision_at_1 value: 33.39966666666667 - type: 
precision_at_10 value: 7.738916666666667 - type: precision_at_100 value: 1.2265833333333331 - type: precision_at_1000 value: 0.15983333333333336 - type: precision_at_3 value: 17.967416666666665 - type: precision_at_5 value: 12.78675 - type: recall_at_1 value: 28.099166666666665 - type: recall_at_10 value: 56.27049999999999 - type: recall_at_100 value: 78.93291666666667 - type: recall_at_1000 value: 92.81608333333334 - type: recall_at_3 value: 42.09775 - type: recall_at_5 value: 48.42533333333334 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.663 - type: map_at_10 value: 30.377 - type: map_at_100 value: 31.426 - type: map_at_1000 value: 31.519000000000002 - type: map_at_3 value: 28.069 - type: map_at_5 value: 29.256999999999998 - type: mrr_at_1 value: 26.687 - type: mrr_at_10 value: 33.107 - type: mrr_at_100 value: 34.055 - type: mrr_at_1000 value: 34.117999999999995 - type: mrr_at_3 value: 31.058000000000003 - type: mrr_at_5 value: 32.14 - type: ndcg_at_1 value: 26.687 - type: ndcg_at_10 value: 34.615 - type: ndcg_at_100 value: 39.776 - type: ndcg_at_1000 value: 42.05 - type: ndcg_at_3 value: 30.322 - type: ndcg_at_5 value: 32.157000000000004 - type: precision_at_1 value: 26.687 - type: precision_at_10 value: 5.491 - type: precision_at_100 value: 0.877 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 13.139000000000001 - type: precision_at_5 value: 9.049 - type: recall_at_1 value: 23.663 - type: recall_at_10 value: 45.035 - type: recall_at_100 value: 68.554 - type: recall_at_1000 value: 85.077 - type: recall_at_3 value: 32.982 - type: recall_at_5 value: 37.688 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 17.403 - type: map_at_10 value: 25.197000000000003 - type: map_at_100 value: 26.355 - type: map_at_1000 value: 26.487 - type: map_at_3 value: 22.733 - type: map_at_5 value: 24.114 - type: mrr_at_1 value: 21.37 - type: mrr_at_10 value: 29.091 - type: mrr_at_100 value: 30.018 - type: mrr_at_1000 value: 30.096 - type: mrr_at_3 value: 26.887 - type: mrr_at_5 value: 28.157 - type: ndcg_at_1 value: 21.37 - type: ndcg_at_10 value: 30.026000000000003 - type: ndcg_at_100 value: 35.416 - type: ndcg_at_1000 value: 38.45 - type: ndcg_at_3 value: 25.764 - type: ndcg_at_5 value: 27.742 - type: precision_at_1 value: 21.37 - type: precision_at_10 value: 5.609 - type: precision_at_100 value: 0.9860000000000001 - type: precision_at_1000 value: 0.14300000000000002 - type: precision_at_3 value: 12.423 - type: precision_at_5 value: 9.009 - type: recall_at_1 value: 17.403 - type: recall_at_10 value: 40.573 - type: recall_at_100 value: 64.818 - type: recall_at_1000 value: 86.53699999999999 - type: recall_at_3 value: 28.493000000000002 - type: recall_at_5 value: 33.660000000000004 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 28.639 - type: map_at_10 value: 38.951 - type: map_at_100 value: 40.238 - type: map_at_1000 value: 40.327 - type: map_at_3 value: 35.842 - type: map_at_5 value: 37.617 - type: mrr_at_1 value: 33.769 - type: mrr_at_10 value: 43.088 - type: mrr_at_100 value: 44.03 - type: mrr_at_1000 value: 44.072 - type: mrr_at_3 value: 40.656 - type: mrr_at_5 value: 42.138999999999996 - type: ndcg_at_1 value: 
33.769 - type: ndcg_at_10 value: 44.676 - type: ndcg_at_100 value: 50.416000000000004 - type: ndcg_at_1000 value: 52.227999999999994 - type: ndcg_at_3 value: 39.494 - type: ndcg_at_5 value: 42.013 - type: precision_at_1 value: 33.769 - type: precision_at_10 value: 7.668 - type: precision_at_100 value: 1.18 - type: precision_at_1000 value: 0.145 - type: precision_at_3 value: 18.221 - type: precision_at_5 value: 12.966 - type: recall_at_1 value: 28.639 - type: recall_at_10 value: 57.687999999999995 - type: recall_at_100 value: 82.541 - type: recall_at_1000 value: 94.896 - type: recall_at_3 value: 43.651 - type: recall_at_5 value: 49.925999999999995 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 29.57 - type: map_at_10 value: 40.004 - type: map_at_100 value: 41.75 - type: map_at_1000 value: 41.97 - type: map_at_3 value: 36.788 - type: map_at_5 value: 38.671 - type: mrr_at_1 value: 35.375 - type: mrr_at_10 value: 45.121 - type: mrr_at_100 value: 45.994 - type: mrr_at_1000 value: 46.04 - type: mrr_at_3 value: 42.227 - type: mrr_at_5 value: 43.995 - type: ndcg_at_1 value: 35.375 - type: ndcg_at_10 value: 46.392 - type: ndcg_at_100 value: 52.196 - type: ndcg_at_1000 value: 54.274 - type: ndcg_at_3 value: 41.163 - type: ndcg_at_5 value: 43.813 - type: precision_at_1 value: 35.375 - type: precision_at_10 value: 8.676 - type: precision_at_100 value: 1.678 - type: precision_at_1000 value: 0.253 - type: precision_at_3 value: 19.104 - type: precision_at_5 value: 13.913 - type: recall_at_1 value: 29.57 - type: recall_at_10 value: 58.779 - type: recall_at_100 value: 83.337 - type: recall_at_1000 value: 95.979 - type: recall_at_3 value: 44.005 - type: recall_at_5 value: 50.975 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 20.832 - type: map_at_10 value: 29.733999999999998 - type: map_at_100 value: 30.727 - type: map_at_1000 value: 30.843999999999998 - type: map_at_3 value: 26.834999999999997 - type: map_at_5 value: 28.555999999999997 - type: mrr_at_1 value: 22.921 - type: mrr_at_10 value: 31.791999999999998 - type: mrr_at_100 value: 32.666000000000004 - type: mrr_at_1000 value: 32.751999999999995 - type: mrr_at_3 value: 29.144 - type: mrr_at_5 value: 30.622 - type: ndcg_at_1 value: 22.921 - type: ndcg_at_10 value: 34.915 - type: ndcg_at_100 value: 39.744 - type: ndcg_at_1000 value: 42.407000000000004 - type: ndcg_at_3 value: 29.421000000000003 - type: ndcg_at_5 value: 32.211 - type: precision_at_1 value: 22.921 - type: precision_at_10 value: 5.675 - type: precision_at_100 value: 0.872 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 12.753999999999998 - type: precision_at_5 value: 9.353 - type: recall_at_1 value: 20.832 - type: recall_at_10 value: 48.795 - type: recall_at_100 value: 70.703 - type: recall_at_1000 value: 90.187 - type: recall_at_3 value: 34.455000000000005 - type: recall_at_5 value: 40.967 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 10.334 - type: map_at_10 value: 19.009999999999998 - type: map_at_100 value: 21.129 - type: map_at_1000 value: 21.328 - type: map_at_3 value: 15.152 - type: map_at_5 value: 17.084 - type: mrr_at_1 value: 23.453 - type: mrr_at_10 value: 36.099 - type: mrr_at_100 value: 37.069 - type: 
mrr_at_1000 value: 37.104 - type: mrr_at_3 value: 32.096000000000004 - type: mrr_at_5 value: 34.451 - type: ndcg_at_1 value: 23.453 - type: ndcg_at_10 value: 27.739000000000004 - type: ndcg_at_100 value: 35.836 - type: ndcg_at_1000 value: 39.242 - type: ndcg_at_3 value: 21.263 - type: ndcg_at_5 value: 23.677 - type: precision_at_1 value: 23.453 - type: precision_at_10 value: 9.199 - type: precision_at_100 value: 1.791 - type: precision_at_1000 value: 0.242 - type: precision_at_3 value: 16.2 - type: precision_at_5 value: 13.147 - type: recall_at_1 value: 10.334 - type: recall_at_10 value: 35.177 - type: recall_at_100 value: 63.009 - type: recall_at_1000 value: 81.938 - type: recall_at_3 value: 19.914 - type: recall_at_5 value: 26.077 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 8.212 - type: map_at_10 value: 17.386 - type: map_at_100 value: 24.234 - type: map_at_1000 value: 25.724999999999998 - type: map_at_3 value: 12.727 - type: map_at_5 value: 14.785 - type: mrr_at_1 value: 59.25 - type: mrr_at_10 value: 68.687 - type: mrr_at_100 value: 69.133 - type: mrr_at_1000 value: 69.14099999999999 - type: mrr_at_3 value: 66.917 - type: mrr_at_5 value: 67.742 - type: ndcg_at_1 value: 48.625 - type: ndcg_at_10 value: 36.675999999999995 - type: ndcg_at_100 value: 41.543 - type: ndcg_at_1000 value: 49.241 - type: ndcg_at_3 value: 41.373 - type: ndcg_at_5 value: 38.707 - type: precision_at_1 value: 59.25 - type: precision_at_10 value: 28.525 - type: precision_at_100 value: 9.027000000000001 - type: precision_at_1000 value: 1.8339999999999999 - type: precision_at_3 value: 44.833 - type: precision_at_5 value: 37.35 - type: recall_at_1 value: 8.212 - type: recall_at_10 value: 23.188 - type: recall_at_100 value: 48.613 - type: recall_at_1000 value: 73.093 - type: recall_at_3 value: 14.419 - type: recall_at_5 value: 17.798 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 52.725 - type: f1 value: 46.50743309855908 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 55.086 - type: map_at_10 value: 66.914 - type: map_at_100 value: 67.321 - type: map_at_1000 value: 67.341 - type: map_at_3 value: 64.75800000000001 - type: map_at_5 value: 66.189 - type: mrr_at_1 value: 59.28600000000001 - type: mrr_at_10 value: 71.005 - type: mrr_at_100 value: 71.304 - type: mrr_at_1000 value: 71.313 - type: mrr_at_3 value: 69.037 - type: mrr_at_5 value: 70.35 - type: ndcg_at_1 value: 59.28600000000001 - type: ndcg_at_10 value: 72.695 - type: ndcg_at_100 value: 74.432 - type: ndcg_at_1000 value: 74.868 - type: ndcg_at_3 value: 68.72200000000001 - type: ndcg_at_5 value: 71.081 - type: precision_at_1 value: 59.28600000000001 - type: precision_at_10 value: 9.499 - type: precision_at_100 value: 1.052 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 27.503 - type: precision_at_5 value: 17.854999999999997 - type: recall_at_1 value: 55.086 - type: recall_at_10 value: 86.453 - type: recall_at_100 value: 94.028 - type: recall_at_1000 value: 97.052 - type: recall_at_3 value: 75.821 - type: recall_at_5 value: 81.6 - task: type: Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 22.262999999999998 
- type: map_at_10 value: 37.488 - type: map_at_100 value: 39.498 - type: map_at_1000 value: 39.687 - type: map_at_3 value: 32.529 - type: map_at_5 value: 35.455 - type: mrr_at_1 value: 44.907000000000004 - type: mrr_at_10 value: 53.239000000000004 - type: mrr_at_100 value: 54.086 - type: mrr_at_1000 value: 54.122 - type: mrr_at_3 value: 51.235 - type: mrr_at_5 value: 52.415 - type: ndcg_at_1 value: 44.907000000000004 - type: ndcg_at_10 value: 45.446 - type: ndcg_at_100 value: 52.429 - type: ndcg_at_1000 value: 55.169000000000004 - type: ndcg_at_3 value: 41.882000000000005 - type: ndcg_at_5 value: 43.178 - type: precision_at_1 value: 44.907000000000004 - type: precision_at_10 value: 12.931999999999999 - type: precision_at_100 value: 2.025 - type: precision_at_1000 value: 0.248 - type: precision_at_3 value: 28.652 - type: precision_at_5 value: 21.204 - type: recall_at_1 value: 22.262999999999998 - type: recall_at_10 value: 52.447 - type: recall_at_100 value: 78.045 - type: recall_at_1000 value: 94.419 - type: recall_at_3 value: 38.064 - type: recall_at_5 value: 44.769 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 32.519 - type: map_at_10 value: 45.831 - type: map_at_100 value: 46.815 - type: map_at_1000 value: 46.899 - type: map_at_3 value: 42.836 - type: map_at_5 value: 44.65 - type: mrr_at_1 value: 65.037 - type: mrr_at_10 value: 72.16 - type: mrr_at_100 value: 72.51100000000001 - type: mrr_at_1000 value: 72.53 - type: mrr_at_3 value: 70.682 - type: mrr_at_5 value: 71.54599999999999 - type: ndcg_at_1 value: 65.037 - type: ndcg_at_10 value: 55.17999999999999 - type: ndcg_at_100 value: 58.888 - type: ndcg_at_1000 value: 60.648 - type: ndcg_at_3 value: 50.501 - type: ndcg_at_5 value: 52.977 - type: precision_at_1 value: 65.037 - type: precision_at_10 value: 11.530999999999999 - type: precision_at_100 value: 1.4460000000000002 - type: precision_at_1000 value: 0.168 - type: precision_at_3 value: 31.483 - type: precision_at_5 value: 20.845 - type: recall_at_1 value: 32.519 - type: recall_at_10 value: 57.657000000000004 - type: recall_at_100 value: 72.30199999999999 - type: recall_at_1000 value: 84.024 - type: recall_at_3 value: 47.225 - type: recall_at_5 value: 52.113 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 88.3168 - type: ap value: 83.80165516037135 - type: f1 value: 88.29942471066407 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 20.724999999999998 - type: map_at_10 value: 32.736 - type: map_at_100 value: 33.938 - type: map_at_1000 value: 33.991 - type: map_at_3 value: 28.788000000000004 - type: map_at_5 value: 31.016 - type: mrr_at_1 value: 21.361 - type: mrr_at_10 value: 33.323 - type: mrr_at_100 value: 34.471000000000004 - type: mrr_at_1000 value: 34.518 - type: mrr_at_3 value: 29.453000000000003 - type: mrr_at_5 value: 31.629 - type: ndcg_at_1 value: 21.361 - type: ndcg_at_10 value: 39.649 - type: ndcg_at_100 value: 45.481 - type: ndcg_at_1000 value: 46.775 - type: ndcg_at_3 value: 31.594 - type: ndcg_at_5 value: 35.543 - type: precision_at_1 value: 21.361 - type: precision_at_10 value: 6.3740000000000006 - type: precision_at_100 value: 0.931 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 13.514999999999999 - type: 
precision_at_5 value: 10.100000000000001 - type: recall_at_1 value: 20.724999999999998 - type: recall_at_10 value: 61.034 - type: recall_at_100 value: 88.062 - type: recall_at_1000 value: 97.86399999999999 - type: recall_at_3 value: 39.072 - type: recall_at_5 value: 48.53 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.8919288645691 - type: f1 value: 93.57059586398059 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 67.97993616051072 - type: f1 value: 48.244319183606535 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.90047074646941 - type: f1 value: 66.48999056063725 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.34566240753195 - type: f1 value: 73.54164154290658 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 34.21866934757011 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 32.000936217235534 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.68189362520352 - type: mrr value: 32.69603637784303 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 6.078 - type: map_at_10 value: 12.671 - type: map_at_100 value: 16.291 - type: map_at_1000 value: 17.855999999999998 - type: map_at_3 value: 9.610000000000001 - type: map_at_5 value: 11.152 - type: mrr_at_1 value: 43.963 - type: mrr_at_10 value: 53.173 - type: mrr_at_100 value: 53.718999999999994 - type: mrr_at_1000 value: 53.756 - type: mrr_at_3 value: 50.980000000000004 - type: mrr_at_5 value: 52.42 - type: ndcg_at_1 value: 42.415000000000006 - type: ndcg_at_10 value: 34.086 - type: ndcg_at_100 value: 32.545 - type: ndcg_at_1000 value: 41.144999999999996 - type: ndcg_at_3 value: 39.434999999999995 - type: ndcg_at_5 value: 37.888 - type: precision_at_1 value: 43.653 - type: precision_at_10 value: 25.014999999999997 - type: precision_at_100 value: 8.594 - type: precision_at_1000 value: 2.169 - type: precision_at_3 value: 37.049 - type: precision_at_5 value: 33.065 - type: recall_at_1 value: 6.078 - type: recall_at_10 value: 16.17 - type: recall_at_100 value: 34.512 - type: recall_at_1000 value: 65.447 - type: recall_at_3 value: 10.706 - type: recall_at_5 value: 13.158 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 27.378000000000004 - type: map_at_10 value: 42.178 - type: map_at_100 value: 43.32 - type: map_at_1000 value: 43.358000000000004 
- type: map_at_3 value: 37.474000000000004 - type: map_at_5 value: 40.333000000000006 - type: mrr_at_1 value: 30.823 - type: mrr_at_10 value: 44.626 - type: mrr_at_100 value: 45.494 - type: mrr_at_1000 value: 45.519 - type: mrr_at_3 value: 40.585 - type: mrr_at_5 value: 43.146 - type: ndcg_at_1 value: 30.794 - type: ndcg_at_10 value: 50.099000000000004 - type: ndcg_at_100 value: 54.900999999999996 - type: ndcg_at_1000 value: 55.69499999999999 - type: ndcg_at_3 value: 41.238 - type: ndcg_at_5 value: 46.081 - type: precision_at_1 value: 30.794 - type: precision_at_10 value: 8.549 - type: precision_at_100 value: 1.124 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 18.926000000000002 - type: precision_at_5 value: 14.16 - type: recall_at_1 value: 27.378000000000004 - type: recall_at_10 value: 71.842 - type: recall_at_100 value: 92.565 - type: recall_at_1000 value: 98.402 - type: recall_at_3 value: 49.053999999999995 - type: recall_at_5 value: 60.207 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 70.557 - type: map_at_10 value: 84.729 - type: map_at_100 value: 85.369 - type: map_at_1000 value: 85.382 - type: map_at_3 value: 81.72 - type: map_at_5 value: 83.613 - type: mrr_at_1 value: 81.3 - type: mrr_at_10 value: 87.488 - type: mrr_at_100 value: 87.588 - type: mrr_at_1000 value: 87.589 - type: mrr_at_3 value: 86.53 - type: mrr_at_5 value: 87.18599999999999 - type: ndcg_at_1 value: 81.28999999999999 - type: ndcg_at_10 value: 88.442 - type: ndcg_at_100 value: 89.637 - type: ndcg_at_1000 value: 89.70700000000001 - type: ndcg_at_3 value: 85.55199999999999 - type: ndcg_at_5 value: 87.154 - type: precision_at_1 value: 81.28999999999999 - type: precision_at_10 value: 13.489999999999998 - type: precision_at_100 value: 1.54 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.553 - type: precision_at_5 value: 24.708 - type: recall_at_1 value: 70.557 - type: recall_at_10 value: 95.645 - type: recall_at_100 value: 99.693 - type: recall_at_1000 value: 99.995 - type: recall_at_3 value: 87.359 - type: recall_at_5 value: 91.89699999999999 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 63.65060114776209 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 64.63271250680617 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 4.263 - type: map_at_10 value: 10.801 - type: map_at_100 value: 12.888 - type: map_at_1000 value: 13.224 - type: map_at_3 value: 7.362 - type: map_at_5 value: 9.149000000000001 - type: mrr_at_1 value: 21 - type: mrr_at_10 value: 31.416 - type: mrr_at_100 value: 32.513 - type: mrr_at_1000 value: 32.58 - type: mrr_at_3 value: 28.116999999999997 - type: mrr_at_5 value: 29.976999999999997 - type: ndcg_at_1 value: 21 - type: ndcg_at_10 value: 18.551000000000002 - type: ndcg_at_100 value: 26.657999999999998 - type: ndcg_at_1000 value: 32.485 - type: ndcg_at_3 value: 16.834 - type: ndcg_at_5 value: 15.204999999999998 - type: precision_at_1 value: 21 - type: precision_at_10 value: 9.84 - type: precision_at_100 value: 2.16 - type: precision_at_1000 value: 
0.35500000000000004 - type: precision_at_3 value: 15.667 - type: precision_at_5 value: 13.62 - type: recall_at_1 value: 4.263 - type: recall_at_10 value: 19.922 - type: recall_at_100 value: 43.808 - type: recall_at_1000 value: 72.14500000000001 - type: recall_at_3 value: 9.493 - type: recall_at_5 value: 13.767999999999999 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_spearman value: 81.27446313317233 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_spearman value: 76.27963301217527 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_spearman value: 88.18495048450949 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_spearman value: 81.91982338692046 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_spearman value: 89.00896818385291 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_spearman value: 85.48814644586132 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_spearman value: 90.30116926966582 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_spearman value: 67.74132963032342 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_spearman value: 86.87741355780479 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 82.0019012295875 - type: mrr value: 94.70267024188593 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 50.05 - type: map_at_10 value: 59.36 - type: map_at_100 value: 59.967999999999996 - type: map_at_1000 value: 60.023 - type: map_at_3 value: 56.515 - type: map_at_5 value: 58.272999999999996 - type: mrr_at_1 value: 53 - type: mrr_at_10 value: 61.102000000000004 - type: mrr_at_100 value: 61.476 - type: mrr_at_1000 value: 61.523 - type: mrr_at_3 value: 58.778 - type: mrr_at_5 value: 60.128 - type: ndcg_at_1 value: 53 - type: ndcg_at_10 value: 64.43100000000001 - type: ndcg_at_100 value: 66.73599999999999 - type: ndcg_at_1000 value: 68.027 - type: ndcg_at_3 value: 59.279 - type: ndcg_at_5 value: 61.888 - type: precision_at_1 value: 53 - type: precision_at_10 value: 8.767 - type: precision_at_100 value: 1.01 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 23.444000000000003 - type: precision_at_5 value: 15.667 - type: recall_at_1 value: 50.05 - type: recall_at_10 value: 78.511 - type: recall_at_100 value: 88.5 - type: 
recall_at_1000 value: 98.333 - type: recall_at_3 value: 64.117 - type: recall_at_5 value: 70.867 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.72178217821782 - type: cos_sim_ap value: 93.0728601593541 - type: cos_sim_f1 value: 85.6727976766699 - type: cos_sim_precision value: 83.02063789868667 - type: cos_sim_recall value: 88.5 - type: dot_accuracy value: 99.72178217821782 - type: dot_ap value: 93.07287396168348 - type: dot_f1 value: 85.6727976766699 - type: dot_precision value: 83.02063789868667 - type: dot_recall value: 88.5 - type: euclidean_accuracy value: 99.72178217821782 - type: euclidean_ap value: 93.07285657982895 - type: euclidean_f1 value: 85.6727976766699 - type: euclidean_precision value: 83.02063789868667 - type: euclidean_recall value: 88.5 - type: manhattan_accuracy value: 99.72475247524753 - type: manhattan_ap value: 93.02792973059809 - type: manhattan_f1 value: 85.7727737973388 - type: manhattan_precision value: 87.84067085953879 - type: manhattan_recall value: 83.8 - type: max_accuracy value: 99.72475247524753 - type: max_ap value: 93.07287396168348 - type: max_f1 value: 85.7727737973388 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 68.77583615550819 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 36.151636938606956 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 52.16607939471187 - type: mrr value: 52.95172046091163 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.314646669495666 - type: cos_sim_spearman value: 31.83562491439455 - type: dot_pearson value: 31.314590842874157 - type: dot_spearman value: 31.83363065810437 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.198 - type: map_at_10 value: 1.3010000000000002 - type: map_at_100 value: 7.2139999999999995 - type: map_at_1000 value: 20.179 - type: map_at_3 value: 0.528 - type: map_at_5 value: 0.8019999999999999 - type: mrr_at_1 value: 72 - type: mrr_at_10 value: 83.39999999999999 - type: mrr_at_100 value: 83.39999999999999 - type: mrr_at_1000 value: 83.39999999999999 - type: mrr_at_3 value: 81.667 - type: mrr_at_5 value: 83.06700000000001 - type: ndcg_at_1 value: 66 - type: ndcg_at_10 value: 58.059000000000005 - type: ndcg_at_100 value: 44.316 - type: ndcg_at_1000 value: 43.147000000000006 - type: ndcg_at_3 value: 63.815999999999995 - type: ndcg_at_5 value: 63.005 - type: precision_at_1 value: 72 - type: precision_at_10 value: 61.4 - type: precision_at_100 value: 45.62 - type: precision_at_1000 value: 19.866 - type: precision_at_3 value: 70 - type: precision_at_5 value: 68.8 - type: recall_at_1 value: 0.198 - type: recall_at_10 value: 1.517 - type: 
recall_at_100 value: 10.587 - type: recall_at_1000 value: 41.233 - type: recall_at_3 value: 0.573 - type: recall_at_5 value: 0.907 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 1.894 - type: map_at_10 value: 8.488999999999999 - type: map_at_100 value: 14.445 - type: map_at_1000 value: 16.078 - type: map_at_3 value: 4.589 - type: map_at_5 value: 6.019 - type: mrr_at_1 value: 22.448999999999998 - type: mrr_at_10 value: 39.82 - type: mrr_at_100 value: 40.752 - type: mrr_at_1000 value: 40.771 - type: mrr_at_3 value: 34.354 - type: mrr_at_5 value: 37.721 - type: ndcg_at_1 value: 19.387999999999998 - type: ndcg_at_10 value: 21.563 - type: ndcg_at_100 value: 33.857 - type: ndcg_at_1000 value: 46.199 - type: ndcg_at_3 value: 22.296 - type: ndcg_at_5 value: 21.770999999999997 - type: precision_at_1 value: 22.448999999999998 - type: precision_at_10 value: 19.796 - type: precision_at_100 value: 7.142999999999999 - type: precision_at_1000 value: 1.541 - type: precision_at_3 value: 24.490000000000002 - type: precision_at_5 value: 22.448999999999998 - type: recall_at_1 value: 1.894 - type: recall_at_10 value: 14.931 - type: recall_at_100 value: 45.524 - type: recall_at_1000 value: 83.243 - type: recall_at_3 value: 5.712 - type: recall_at_5 value: 8.386000000000001 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.049 - type: ap value: 13.85116971310922 - type: f1 value: 54.37504302487686 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 64.1312959818902 - type: f1 value: 64.11413877009383 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 54.13103431861502 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 87.327889372355 - type: cos_sim_ap value: 77.42059895975699 - type: cos_sim_f1 value: 71.02706903250873 - type: cos_sim_precision value: 69.75324344950394 - type: cos_sim_recall value: 72.34828496042216 - type: dot_accuracy value: 87.327889372355 - type: dot_ap value: 77.4209479346677 - type: dot_f1 value: 71.02706903250873 - type: dot_precision value: 69.75324344950394 - type: dot_recall value: 72.34828496042216 - type: euclidean_accuracy value: 87.327889372355 - type: euclidean_ap value: 77.42096495861037 - type: euclidean_f1 value: 71.02706903250873 - type: euclidean_precision value: 69.75324344950394 - type: euclidean_recall value: 72.34828496042216 - type: manhattan_accuracy value: 87.31000774870358 - type: manhattan_ap value: 77.38930750711619 - type: manhattan_f1 value: 71.07935314027831 - type: manhattan_precision value: 67.70957726295677 - type: manhattan_recall value: 74.80211081794195 - type: max_accuracy value: 87.327889372355 - type: max_ap value: 77.42096495861037 - type: max_f1 value: 71.07935314027831 - task: type: PairClassification dataset: type: 
mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.58939729110878 - type: cos_sim_ap value: 87.17594155025475 - type: cos_sim_f1 value: 79.21146953405018 - type: cos_sim_precision value: 76.8918527109307 - type: cos_sim_recall value: 81.67539267015707 - type: dot_accuracy value: 89.58939729110878 - type: dot_ap value: 87.17593963273593 - type: dot_f1 value: 79.21146953405018 - type: dot_precision value: 76.8918527109307 - type: dot_recall value: 81.67539267015707 - type: euclidean_accuracy value: 89.58939729110878 - type: euclidean_ap value: 87.17592466925834 - type: euclidean_f1 value: 79.21146953405018 - type: euclidean_precision value: 76.8918527109307 - type: euclidean_recall value: 81.67539267015707 - type: manhattan_accuracy value: 89.62626615438352 - type: manhattan_ap value: 87.16589873161546 - type: manhattan_f1 value: 79.25143598295348 - type: manhattan_precision value: 76.39494177323712 - type: manhattan_recall value: 82.32984293193716 - type: max_accuracy value: 89.62626615438352 - type: max_ap value: 87.17594155025475 - type: max_f1 value: 79.25143598295348 duplicated_from: hkunlp/instructor-large ---

# hkunlp/instructor-large

We introduce **Instructor**👨‍🏫, an instruction-finetuned text embedding model that can generate text embeddings tailored to any task (e.g., classification, retrieval, clustering, text evaluation, etc.) and domain (e.g., science, finance, etc.) ***by simply providing the task instruction, without any finetuning***. Instructor👨‍🏫 achieves state-of-the-art (SOTA) results on 70 diverse embedding tasks ([MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard))! The model is easy to use with **our customized** `sentence-transformer` library. For more details, check out [our paper](https://arxiv.org/abs/2212.09741) and [project page](https://instructor-embedding.github.io/)!

**************************** **Updates** ****************************

* 12/28: We released a new [checkpoint](https://huggingface.co/hkunlp/instructor-large) trained with hard negatives, which gives better performance.
* 12/21: We released our [paper](https://arxiv.org/abs/2212.09741), [code](https://github.com/HKUNLP/instructor-embedding), [checkpoint](https://huggingface.co/hkunlp/instructor-large) and [project page](https://instructor-embedding.github.io/)! Check them out!

## Quick start

<hr />

## Installation

```bash
pip install InstructorEmbedding
```

## Compute your customized embeddings

Then you can use the model like this to calculate domain-specific and task-aware embeddings:

```python
from InstructorEmbedding import INSTRUCTOR

model = INSTRUCTOR('hkunlp/instructor-large')
sentence = "3D ActionSLAM: wearable person tracking in multi-floor environments"
instruction = "Represent the Science title:"
embeddings = model.encode([[instruction, sentence]])
print(embeddings)
```

## Use cases

<hr />

## Calculate embeddings for your customized texts

If you want to calculate customized embeddings for specific sentences, you may follow the unified template to write instructions:

Represent the `domain` `text_type` for `task_objective`:

* `domain` is optional, and it specifies the domain of the text, e.g., science, finance, medicine, etc.
* `text_type` is required, and it specifies the encoding unit, e.g., sentence, document, paragraph, etc.
* `task_objective` is optional, and it specifies the objective of embedding, e.g., retrieve a document, classify the sentence, etc.

## Calculate Sentence similarities

You can further use the model to compute similarities between two groups of sentences, with **customized embeddings**.

```python
from sklearn.metrics.pairwise import cosine_similarity

sentences_a = [['Represent the Science sentence: ', 'Parton energy loss in QCD matter'],
               ['Represent the Financial statement: ', 'The Federal Reserve on Wednesday raised its benchmark interest rate.']]
sentences_b = [['Represent the Science sentence: ', 'The Chiral Phase Transition in Dissipative Dynamics'],
               ['Represent the Financial statement: ', 'The funds rose less than 0.5 per cent on Friday']]
embeddings_a = model.encode(sentences_a)
embeddings_b = model.encode(sentences_b)
similarities = cosine_similarity(embeddings_a, embeddings_b)
print(similarities)
```

## Information Retrieval

You can also use **customized embeddings** for information retrieval.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

query = [['Represent the Wikipedia question for retrieving supporting documents: ', 'where is the food stored in a yam plant']]
corpus = [['Represent the Wikipedia document for retrieval: ', 'Capitalism has been dominant in the Western world since the end of feudalism, but most feel[who?] that the term "mixed economies" more precisely describes most contemporary economies, due to their containing both private-owned and state-owned enterprises. In capitalism, prices determine the demand-supply scale. For example, higher demand for certain goods and services lead to higher prices and lower demand for certain goods lead to lower prices.'],
          ['Represent the Wikipedia document for retrieval: ', "The disparate impact theory is especially controversial under the Fair Housing Act because the Act regulates many activities relating to housing, insurance, and mortgage loans—and some scholars have argued that the theory's use under the Fair Housing Act, combined with extensions of the Community Reinvestment Act, contributed to rise of sub-prime lending and the crash of the U.S. housing market and ensuing global economic recession"],
          ['Represent the Wikipedia document for retrieval: ', 'Disparate impact in United States labor law refers to practices in employment, housing, and other areas that adversely affect one group of people of a protected characteristic more than another, even though rules applied by employers or landlords are formally neutral. Although the protected classes vary by statute, most federal civil rights laws protect based on race, color, religion, national origin, and sex as protected traits, and some laws include disability status and other traits as well.']]
query_embeddings = model.encode(query)
corpus_embeddings = model.encode(corpus)
similarities = cosine_similarity(query_embeddings, corpus_embeddings)
retrieved_doc_id = np.argmax(similarities)
print(retrieved_doc_id)
```

## Clustering

Use **customized embeddings** for clustering texts in groups.
```python
import sklearn.cluster

sentences = [['Represent the Medicine sentence for clustering: ', 'Dynamical Scalar Degree of Freedom in Horava-Lifshitz Gravity'],
             ['Represent the Medicine sentence for clustering: ', 'Comparison of Atmospheric Neutrino Flux Calculations at Low Energies'],
             ['Represent the Medicine sentence for clustering: ', 'Fermion Bags in the Massive Gross-Neveu Model'],
             ['Represent the Medicine sentence for clustering: ', "QCD corrections to Associated t-tbar-H production at the Tevatron"],
             ['Represent the Medicine sentence for clustering: ', 'A New Analysis of the R Measurements: Resonance Parameters of the Higher, Vector States of Charmonium']]
embeddings = model.encode(sentences)
clustering_model = sklearn.cluster.MiniBatchKMeans(n_clusters=2)
clustering_model.fit(embeddings)
cluster_assignment = clustering_model.labels_
print(cluster_assignment)
```
chh6/v0TaxiAttempt
chh6
2023-07-12T19:47:12Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-12T19:47:10Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: v0TaxiAttempt results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false ---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gym

# `load_from_hub` is not defined in this card; see the helper sketch below.
model = load_from_hub(repo_id="chh6/v0TaxiAttempt", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
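The snippet above relies on a `load_from_hub` helper that the card never defines. Below is a minimal sketch of such a helper, assuming the model was pushed as a pickle file (as the `q-learning.pkl` filename suggests) containing a dict with keys such as `"env_id"`; the exact contents of that dict are an assumption.

```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-Learning model from the Hugging Face Hub."""
    # Fetch the file from the Hub (cached locally after the first call)
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    # The pickle is assumed to hold a dict with entries like "env_id" and the Q-table
    with open(local_path, "rb") as f:
        model = pickle.load(f)
    return model
```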
veluchs/dqn-SpaceInvadersNoFrameskip-v4-4
veluchs
2023-07-12T19:41:22Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-12T19:40:57Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 264.50 +/- 87.36 name: mean_reward verified: false ---

# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**

This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).

The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib

Install the RL Zoo (with SB3 and SB3-Contrib):

```bash
pip install rl_zoo3
```

```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga veluchs -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:

```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga veluchs -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

## Training (with the RL Zoo)

```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga veluchs
```

## Hyperparameters

```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 10000),
             ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.0001),
             ('learning_starts', 100000),
             ('n_timesteps', 10000000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```

# Environment Arguments

```python
{'render_mode': 'rgb_array'}
```
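Besides the RL Zoo CLI above, the checkpoint can also be loaded directly in Python. The sketch below is not part of the original card: it uses the `huggingface_sb3` helper, and the checkpoint filename is an assumption based on the usual RL Zoo naming convention (`<algo>-<env>.zip`).

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Filename assumed from the RL Zoo convention: <algo>-<env>.zip
checkpoint = load_from_hub(
    repo_id="veluchs/dqn-SpaceInvadersNoFrameskip-v4-4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```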
Nikhil-HugFace/bert-base-multilingual-cased-finetuned-SQUAD2
Nikhil-HugFace
2023-07-12T19:38:54Z
61
0
transformers
[ "transformers", "tf", "tensorboard", "bert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-12T17:11:27Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Nikhil-HugFace/bert-base-multilingual-cased-finetuned-SQUAD2 results: [] ---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Nikhil-HugFace/bert-base-multilingual-cased-finetuned-SQUAD2

This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 1.6512
- Train End Logits Accuracy: 0.5819
- Train Start Logits Accuracy: 0.6096
- Validation Loss: 1.3298
- Validation End Logits Accuracy: 0.6339
- Validation Start Logits Accuracy: 0.6896
- Epoch: 0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7001, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.6512     | 0.5819                    | 0.6096                      | 1.3298          | 0.6339                         | 0.6896                           | 0     |

### Framework versions

- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
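The card leaves the intended-uses section blank. As a minimal usage sketch (not part of the original card), the checkpoint can be loaded for extractive question answering with the `transformers` pipeline; `framework="tf"` is chosen because the repository ships TensorFlow weights, and the question/context strings are illustrative only.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for extractive QA (TensorFlow weights)
qa = pipeline(
    "question-answering",
    model="Nikhil-HugFace/bert-base-multilingual-cased-finetuned-SQUAD2",
    framework="tf",
)

result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(result["answer"], round(result["score"], 3))
```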
vuiseng9/baseline-ft-mrpc-IRoberta-b-8bit
vuiseng9
2023-07-12T19:21:04Z
6
0
transformers
[ "transformers", "pytorch", "ibert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-12T18:39:16Z
--- language: - en tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: baseline-ft-mrpc-IRoberta-b-8bit results: - task: name: Text Classification type: text-classification dataset: name: GLUE MRPC type: glue config: mrpc split: validation args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8970588235294118 - name: F1 type: f1 value: 0.9257950530035336 ---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# baseline-ft-mrpc-IRoberta-b-8bit

This model is a fine-tuned version of [vuiseng9/baseline-ft-mrpc-IRoberta-b-unquantized](https://huggingface.co/vuiseng9/baseline-ft-mrpc-IRoberta-b-unquantized) on the GLUE MRPC dataset. It achieves the following results on the evaluation set:
- Loss: 0.3871
- Accuracy: 0.8971
- F1: 0.9258
- Combined Score: 0.9114

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.0021        | 1.0   | 230  | 0.4017          | 0.8848   | 0.9147 | 0.8998         |
| 0.0026        | 2.0   | 460  | 0.4105          | 0.8873   | 0.9173 | 0.9023         |
| 0.0026        | 3.0   | 690  | 0.3707          | 0.8946   | 0.9236 | 0.9091         |
| 0.0037        | 4.0   | 920  | 0.3893          | 0.8946   | 0.9228 | 0.9087         |
| 1.324         | 5.0   | 1150 | 0.3871          | 0.8897   | 0.9204 | 0.9050         |
| 0.0227        | 6.0   | 1380 | 0.3951          | 0.8897   | 0.9201 | 0.9049         |
| 0.0081        | 7.0   | 1610 | 0.3818          | 0.8824   | 0.9155 | 0.8989         |
| 0.0054        | 8.0   | 1840 | 0.3902          | 0.8873   | 0.9181 | 0.9027         |
| 0.0383        | 9.0   | 2070 | 0.3659          | 0.8922   | 0.9225 | 0.9073         |
| 0.3861        | 10.0  | 2300 | 0.4260          | 0.8652   | 0.9030 | 0.8841         |
| 0.0028        | 11.0  | 2530 | 0.3619          | 0.8946   | 0.9234 | 0.9090         |
| 0.0957        | 12.0  | 2760 | 0.3871          | 0.8971   | 0.9258 | 0.9114         |

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
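Since MRPC is a sentence-pair task, inference requires passing both sentences to the tokenizer. The sketch below is not from the original card: the example sentences are illustrative, and the label-order comment is an assumption based on the usual GLUE MRPC convention.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "vuiseng9/baseline-ft-mrpc-IRoberta-b-8bit"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# MRPC expects a sentence pair; the tokenizer builds the paired input
inputs = tokenizer(
    "The company posted record profits this quarter.",
    "Profits at the firm hit an all-time high this quarter.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
# Label order assumed to follow GLUE MRPC: index 0 = not equivalent, 1 = equivalent
print(probs)
```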
Ruborobot/distilbert-base-uncased-finetuned-TeacherMomentsConfusion
Ruborobot
2023-07-12T19:16:03Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-11T18:44:27Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: distilbert-base-uncased-finetuned-TeacherMomentsConfusion results: [] ---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-TeacherMomentsConfusion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.6691
- Accuracy: 0.7517
- Precision: 0.1790
- Recall: 0.2359
- F1: 0.2035
- Balanced Accuracy: 0.5339

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     | Balanced Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-----------------:|
| No log        | 1.0   | 295  | 0.6717          | 0.8655   | 0.0       | 0.0    | 0.0    | 0.5               |
| 0.6903        | 2.0   | 590  | 0.6691          | 0.7517   | 0.1790    | 0.2359 | 0.2035 | 0.5339            |
| 0.6903        | 3.0   | 885  | 0.7994          | 0.7076   | 0.1602    | 0.2769 | 0.2030 | 0.5257            |
| 0.5787        | 4.0   | 1180 | 1.0224          | 0.6317   | 0.1576    | 0.4    | 0.2261 | 0.5339            |
| 0.5787        | 5.0   | 1475 | 1.5546          | 0.7621   | 0.1528    | 0.1692 | 0.1606 | 0.5117            |
| 0.3142        | 6.0   | 1770 | 2.0188          | 0.7724   | 0.1271    | 0.1179 | 0.1223 | 0.4960            |
| 0.1212        | 7.0   | 2065 | 2.4508          | 0.8014   | 0.1157    | 0.0718 | 0.0886 | 0.4933            |
| 0.1212        | 8.0   | 2360 | 2.7545          | 0.8138   | 0.1287    | 0.0667 | 0.0878 | 0.4983            |
| 0.0543        | 9.0   | 2655 | 2.8085          | 0.7876   | 0.1258    | 0.0974 | 0.1098 | 0.4961            |
| 0.0543        | 10.0  | 2950 | 2.8602          | 0.7903   | 0.1342    | 0.1026 | 0.1163 | 0.4999            |

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
malik463/arain
malik463
2023-07-12T19:05:44Z
0
0
null
[ "arxiv:1910.09700", "license:openrail", "region:us" ]
null
2023-07-12T19:04:10Z
--- license: openrail --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gmurillo/set-fit-goup-5-f
gmurillo
2023-07-12T18:58:48Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bart", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-07-12T18:57:36Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification ---

# gmurillo/set-fit-goup-5-f

This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Usage

To use this model for inference, first install the SetFit library:

```bash
python -m pip install setfit
```

You can then run inference as follows:

```python
from setfit import SetFitModel

# Download from Hub and run inference
model = SetFitModel.from_pretrained("gmurillo/set-fit-goup-5-f")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```

## BibTeX entry and citation info

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
  doi = {10.48550/ARXIV.2209.11055},
  url = {https://arxiv.org/abs/2209.11055},
  author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Efficient Few-Shot Learning Without Prompts},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
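The card describes the two-step SetFit recipe (contrastive fine-tuning of a Sentence Transformer, then a classification head) but only shows inference. The sketch below illustrates that recipe under stated assumptions: it uses the classic `SetFitTrainer` API, and the base model and dataset are generic placeholders, not what this particular checkpoint was trained on.

```python
from datasets import load_dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Placeholder dataset and base model; swap in your own few-shot data
dataset = load_dataset("SetFit/sst2")
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=dataset["train"].select(range(64)),  # few-shot subset
    eval_dataset=dataset["validation"],
    loss_class=CosineSimilarityLoss,  # step 1: contrastive fine-tuning
    batch_size=16,
    num_iterations=20,  # number of text pairs generated per example
)
trainer.train()  # also fits the classification head (step 2)
metrics = trainer.evaluate()
print(metrics)
```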
gokuls/sa_bert_12_layer_modified_complete_training_48_v2
gokuls
2023-07-12T18:58:21Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "hybridbert", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-10T18:19:08Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: sa_bert_12_layer_modified_complete_training_48_v2 results: [] ---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# sa_bert_12_layer_modified_complete_training_48_v2

This model is a fine-tuned version of an unspecified base checkpoint on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 3.9821
- Accuracy: 0.3685

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step   | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 6.5933        | 0.05  | 10000  | 6.5711          | 0.1226   |
| 6.1523        | 0.11  | 20000  | 6.3425          | 0.1396   |
| 6.1308        | 0.16  | 30000  | 6.2468          | 0.1444   |
| 6.2297        | 0.22  | 40000  | 6.1895          | 0.1468   |
| 6.1484        | 0.27  | 50000  | 6.1483          | 0.1487   |
| 6.0591        | 0.33  | 60000  | 6.1205          | 0.1492   |
| 6.0199        | 0.38  | 70000  | 6.0862          | 0.1501   |
| 5.8666        | 0.44  | 80000  | 5.8875          | 0.1600   |
| 5.9153        | 0.49  | 90000  | 5.7648          | 0.1722   |
| 5.5197        | 0.55  | 100000 | 5.6349          | 0.1891   |
| 5.4384        | 0.6   | 110000 | 5.5023          | 0.2051   |
| 5.3973        | 0.66  | 120000 | 5.3651          | 0.2209   |
| 5.2627        | 0.71  | 130000 | 5.2054          | 0.2395   |
| 5.3179        | 0.76  | 140000 | 5.0131          | 0.2621   |
| 4.8813        | 0.82  | 150000 | 4.7153          | 0.2949   |
| 4.6653        | 0.87  | 160000 | 4.4651          | 0.3209   |
| 4.7227        | 0.93  | 170000 | 4.1752          | 0.3502   |
| 4.2892        | 0.98  | 180000 | 3.9821          | 0.3685   |

### Framework versions

- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.1
- Tokenizers 0.13.3
komo-dono/dashiegames
komo-dono
2023-07-12T18:44:13Z
0
0
null
[ "region:us" ]
null
2023-07-12T18:42:38Z
---
license: openrail
language:
- en
tags:
- music
---

dashiegames, 500 epochs
GodRain/WizardCoder-15B-V1.1-4bit
GodRain
2023-07-12T18:40:18Z
5
2
transformers
[ "transformers", "llama", "text-generation", "en", "dataset:WizardLM/WizardLM_evol_instruct_70k", "arxiv:2304.12244", "license:bigcode-openrail-m", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-07-12T18:00:51Z
---
license: bigcode-openrail-m
datasets:
- WizardLM/WizardLM_evol_instruct_70k
language:
- en
---

<font size=5>Here is an example showing how to use a model quantized by auto_gptq</font>

```python
# pip install auto_gptq
import torch
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

_4BITS_MODEL_PATH_V1_ = 'GodRain/WizardCoder-15B-V1.1-4bit'
device = "cuda:0" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(_4BITS_MODEL_PATH_V1_)
model = AutoGPTQForCausalLM.from_quantized(_4BITS_MODEL_PATH_V1_, device=device)

# `evaluate` is defined in the next code block; run that cell first.
out = evaluate("Hello, tell me a story about sun", model=model, tokenizer=tokenizer)
print(out[0].strip())
```

```python
import torch
from transformers import GenerationConfig

def evaluate(
    batch_data,
    tokenizer,
    model,
    temperature=1,
    top_p=0.9,
    top_k=40,
    num_beams=1,
    max_new_tokens=2048,
    **kwargs,
):
    # generate_prompt is assumed to come from the WizardCoder inference script;
    # it wraps the raw instruction in the model's prompt template.
    prompts = generate_prompt(batch_data)
    inputs = tokenizer(prompts, return_tensors="pt", max_length=256, truncation=True)
    # `device` is the module-level device defined in the previous block.
    input_ids = inputs["input_ids"].to(device)
    generation_config = GenerationConfig(
        temperature=temperature,
        top_p=top_p,
        top_k=top_k,
        num_beams=num_beams,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
        **kwargs,
    )
    with torch.no_grad():
        generation_output = model.generate(
            input_ids=input_ids,
            generation_config=generation_config,
            return_dict_in_generate=True,
            output_scores=True,
            max_new_tokens=max_new_tokens,
        )
    s = generation_output.sequences
    output = tokenizer.batch_decode(s, skip_special_tokens=True)
    return output
```

Citation:

```bibtex
@misc{xu2023wizardlm,
      title={WizardLM: Empowering Large Language Models to Follow Complex Instructions},
      author={Can Xu and Qingfeng Sun and Kai Zheng and Xiubo Geng and Pu Zhao and Jiazhan Feng and Chongyang Tao and Daxin Jiang},
      year={2023},
      eprint={2304.12244},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
GodRain/WizardCoder-15B-V1.1-3bit
GodRain
2023-07-12T18:39:59Z
3
0
transformers
[ "transformers", "llama", "text-generation", "dataset:WizardLM/WizardLM_evol_instruct_70k", "arxiv:2304.12244", "license:bigcode-openrail-m", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-07-12T17:45:29Z
---
license: bigcode-openrail-m
datasets:
- WizardLM/WizardLM_evol_instruct_70k
---

<font size=5>Here is an example showing how to use a model quantized by auto_gptq</font>

```python
# pip install auto_gptq
import torch
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

_3BITS_MODEL_PATH_V1_ = 'GodRain/WizardCoder-15B-V1.1-3bit'
device = "cuda:0" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(_3BITS_MODEL_PATH_V1_)
model = AutoGPTQForCausalLM.from_quantized(_3BITS_MODEL_PATH_V1_, device=device)

# `evaluate` is defined in the next code block; run that cell first.
out = evaluate("Hello, tell me a story about sun", model=model, tokenizer=tokenizer)
print(out[0].strip())
```

```python
import torch
from transformers import GenerationConfig

def evaluate(
    batch_data,
    tokenizer,
    model,
    temperature=1,
    top_p=0.9,
    top_k=40,
    num_beams=1,
    max_new_tokens=2048,
    **kwargs,
):
    # generate_prompt is assumed to come from the WizardCoder inference script;
    # it wraps the raw instruction in the model's prompt template.
    prompts = generate_prompt(batch_data)
    inputs = tokenizer(prompts, return_tensors="pt", max_length=256, truncation=True)
    # `device` is the module-level device defined in the previous block.
    input_ids = inputs["input_ids"].to(device)
    generation_config = GenerationConfig(
        temperature=temperature,
        top_p=top_p,
        top_k=top_k,
        num_beams=num_beams,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
        **kwargs,
    )
    with torch.no_grad():
        generation_output = model.generate(
            input_ids=input_ids,
            generation_config=generation_config,
            return_dict_in_generate=True,
            output_scores=True,
            max_new_tokens=max_new_tokens,
        )
    s = generation_output.sequences
    output = tokenizer.batch_decode(s, skip_special_tokens=True)
    return output
```

Citation:

```bibtex
@misc{xu2023wizardlm,
      title={WizardLM: Empowering Large Language Models to Follow Complex Instructions},
      author={Can Xu and Qingfeng Sun and Kai Zheng and Xiubo Geng and Pu Zhao and Jiazhan Feng and Chongyang Tao and Daxin Jiang},
      year={2023},
      eprint={2304.12244},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
tyavika/LR1E4-BS16-Distilbert-QA-Pytorch-FULL
tyavika
2023-07-12T18:39:38Z
105
0
transformers
[ "transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-07T04:59:00Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: LR1E4-BS16-Distilbert-QA-Pytorch-FULL
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# LR1E4-BS16-Distilbert-QA-Pytorch-FULL

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3888

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:---:|:---:|:---:|:---:|
| 1.4071 | 1.0 | 3290 | 1.2792 |
| 1.0123 | 2.0 | 6580 | 1.2843 |
| 0.6916 | 3.0 | 9870 | 1.3888 |

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
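A minimal inference sketch, assuming the checkpoint loads through the standard `question-answering` pipeline (the question and context strings below are illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="tyavika/LR1E4-BS16-Distilbert-QA-Pytorch-FULL")

# Illustrative inputs; any (question, context) pair works.
result = qa(
    question="What is DistilBERT distilled from?",
    context="DistilBERT is a smaller, faster model distilled from BERT.",
)
print(result["answer"], result["score"])
```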
vuiseng9/baseline-ft-mrpc-IRoberta-b-unquantized
vuiseng9
2023-07-12T18:33:30Z
107
0
transformers
[ "transformers", "pytorch", "ibert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-12T18:24:52Z
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: baseline-ft-mrpc-IRoberta-b-unquantized
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: GLUE MRPC
      type: glue
      config: mrpc
      split: validation
      args: mrpc
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8995098039215687
    - name: F1
      type: f1
      value: 0.9266547406082289
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# baseline-ft-mrpc-IRoberta-b-unquantized

This model is a fine-tuned version of [kssteven/ibert-roberta-base](https://huggingface.co/kssteven/ibert-roberta-base) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5354
- Accuracy: 0.8995
- F1: 0.9267
- Combined Score: 0.9131

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.1212 | 1.0 | 230 | 0.3401 | 0.8799 | 0.9136 | 0.8967 |
| 0.0347 | 2.0 | 460 | 0.3085 | 0.8676 | 0.9059 | 0.8868 |
| 0.0495 | 3.0 | 690 | 0.3552 | 0.8848 | 0.9174 | 0.9011 |
| 0.0024 | 4.0 | 920 | 0.4960 | 0.8824 | 0.9158 | 0.8991 |
| 0.0046 | 5.0 | 1150 | 0.5354 | 0.8995 | 0.9267 | 0.9131 |

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
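A minimal paraphrase-scoring sketch with this checkpoint, assuming the standard sequence-classification head (the two sentences are illustrative, and the label names should be read from `model.config.id2label` rather than assumed):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "vuiseng9/baseline-ft-mrpc-IRoberta-b-unquantized"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# MRPC is a sentence-pair task: encode both sentences together.
enc = tokenizer(
    "The company posted record profits this quarter.",
    "Quarterly profits reached an all-time high for the firm.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**enc).logits.softmax(dim=-1)
print({model.config.id2label[i]: p.item() for i, p in enumerate(probs[0])})
```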
hsultanbey/autocomplete_trainer
hsultanbey
2023-07-12T18:23:42Z
143
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-12T18:22:39Z
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: autocomplete_trainer
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# autocomplete_trainer

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
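The repository name suggests code autocompletion, so here is a minimal generation sketch; since the training data is unknown, the prompt below is purely illustrative and output quality is untested:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="hsultanbey/autocomplete_trainer")

# Illustrative code-completion prompt.
print(generator("def fibonacci(n):", max_new_tokens=40)[0]["generated_text"])
```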
t23e2/poca-SoccerTwos
t23e2
2023-07-12T18:20:17Z
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-07-12T18:20:11Z
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---

# **poca** Agent playing **SoccerTwos**

This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: t23e2/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
asrimanth/person-thumbs-up-lora
asrimanth
2023-07-12T18:19:11Z
2
3
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-07-12T18:18:41Z
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---

# LoRA text2image fine-tuning - asrimanth/person-thumbs-up-lora

These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on a custom dataset. You can find some example images below.

![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)
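A minimal sketch for applying the adapter with diffusers, assuming the repository stores the weights in the standard LoRA layout produced by the diffusers text-to-image training script (requires a diffusers version with `load_lora_weights`; the prompt is illustrative, as the training captions are not documented):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA adapter on top of the base model.
pipe.load_lora_weights("asrimanth/person-thumbs-up-lora")

image = pipe("a photo of a person giving a thumbs up").images[0]
image.save("thumbs_up.png")
```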
Danish-summarisation/DanSumT5-pilot
Danish-summarisation
2023-07-12T18:12:28Z
122
2
transformers
[ "transformers", "pytorch", "safetensors", "mt5", "text2text-generation", "summarization", "da", "arxiv:1804.11283", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-07-05T10:06:53Z
---
language:
- da
tags:
- summarization
widget:
- text: "De strejkende SAS-piloter melder sig nu klar til gøre en undtagelse fra strejken for at hente strandede chartergæster hjem fra flere ferieområder. Undtagelsen skal gælde nogle uger frem, men piloterne vil under ingen omstændigheder have nye gæster med sig ned til de samme destinationer. Det skriver SAS Pilot Group i en pressemeddelelse. - Vi forstår, at det er uundgåeligt, at vores passagerer bliver ramt af strejken. Men vi piloter er altid fokuseret på at opføre os ansvarligt med passagersikkerheden som højeste prioritet, siger Martin Lindgren, der er formand for SAS Pilot Group i Norden. Men for at hjælpe strandede gæster kræver de strejkende piloter samtidig, at SAS' trækker sin lockout af piloterne tilbage. Samtidig ser SAS Pilot Group det som en forudsætning, at SAS ikke får hjælp fra andre flyselskaber til at flyve nye passagerer til de samme destinationer, som piloterne tilbyder at flyve gæster hjem fra, skriver fagforeningen."
  example_title: "Example 1"
- text: "Mere end 21.000 krigsforbrydelser. Så mange efterforsker de ukrainske myndigheder lige nu ifølge den ukrainske rigsadvokat, Iryna Venediktova. Hun oplyser til britiske BBC, at der bliver anmeldt mellem 200 og 300 nye sager om dagen. Forbrydelserne er ifølge Venediktova svære at efterforske, fordi det kan være vanskeligt at komme frem til de relevante områder og mennesker. Men hun understreger overfor BBC, at russiske soldater, der har dræbt, tortureret eller voldtaget civile, bør forstå, at det kun er et spørgsmål om tid, før de alle vil komme for retten. Rusland er blevet anklaget for en lang række krigsforbrydelser, siden landet invaderede Ukraine den 24. februar, men afviser alle anklager."
  example_title: "Example 2"
- text: "Det nye studie Cognitive Science på Aarhus Universitet, som i år havde Østjyllands højeste adgangskrav på 11,7 i karaktergennemsnit, udklækker det første hold bachelorer til sommer. Men når de skal læse videre på kandidaten må de til udlandet, hvis ikke de vil skifte til et andet fag. Aarhus Universitet kan nemlig ikke nå at oprette en kandidat i Cognitive Science til næste sommer, hvor det første hold bachelorer er færdige. Det rammer blandt andre Julie Sohn, der startede på uddannelsen i sommeren 2015, og derfor kun mangler et år, før hun er bachelor. - Jeg synes, at det er ærgerligt, at vi som nye studerende på et populært studie ikke kan tage en kandidat i Danmark, siger hun. Bacheloruddannelsen i Cognitive Science blev oprettet af Aarhus Universitet i 2015, og uddannelsen kombinerer viden om menneskelig adfærd med avanceret statistik. Da der endnu ikke er oprettet en kandidatuddannelse indenfor dette område, har Julie Sohn i stedet mulighed for at læse en kandidatgrad i for eksempel informationsvidenskab. Hun vil dog hellere fortsætte på Cognitive Science, og derfor overvejer hun nu at læse videre i udlandet. - Det ser ud til, at det er den eneste mulighed, hvis man gerne vil læse videre på noget, der faktisk passer ind til vores studie, siger hun. Nye regler giver forsinkelse På Aarhus Universitet havde man håbet på at have kandidatuddannelsen klar, når det første hold bachelorer bliver færdige til sommer. Arbejdet er dog blevet forsinket, fordi der er kommet nye regler for, hvornår man må oprette en uddannelse, fortæller Niels Lehmann, prodekan på fakultetet Arts, som Cognitive Science hører under. Det er nogle meget dygtige studerende, der kommer ind på uddannelsen, og det er klart, at de i et vist omfang vil orientere sig mod udlandet, hvor man så kan forestille sig, at de bider sig fast. NIELS LEHMANN, PRODEKAN, AARHUS UNIVERSITET Tidligere skulle Danmarks Akkrediteringsinstitution se alle nye uddannelser efter i sømmene for at sikre, at kvaliteten var i orden. Nu skal uddannelsesinstitutionerne selv stå for det kvalitetstjek. Men det tjek har Aarhus Universitet endnu ikke fået grønt lys til selv at udføre, fortæller prodekanen. - Vi ville meget gerne have kunnet nå at få et udbud på kandidaten i gang i 2018, men så længe man er under institutionsakkreditering, så kan man ikke ansøge om nye uddannelser, siger han. Det er endnu usikkert, hvornår Aarhus Universitet kan oprette kandidaten i Cognitive Science. Hvis de får alle de nødvendige godkendelser, kan den tidligst være klar i 2019. Prodekan Niels Lehmann frygter, at Danmark kommer til at miste nogle af landets skarpeste studerende, hvis de er nødt til at rejse til udlandet for at gøre deres uddannelse færdig. - Det er nogle meget, meget dygtige studerende, der kommer ind på denne uddannelse, og det er klart, at de i et vist omfang vil orientere sig mod udlandet, hvor man så kan forestille sig, at de bider sig fast, siger han. Hos Danmarks Akkrediteringsinstitution forstår man godt, at universitets ansatte og studenrede ærgrer sig. - Jeg kan godt forstå, at Aarhus Universitet ærgrer sig over, at det trækker ud, og at der går noget tid, før man får mulighed for at oprette nye uddannelser, og at man ikke har fået den genvej til at oprette nye uddannelser, som ville være fuldt med, hvis man havde opnået en positiv institutionsakkreditering, siger kommunikationsansvarlig Daniel Sebastian Larsen. I år var Cognitive Science i Aarhus den uddannelse i Danmark, der havde det fjerde højeste karakterkrav - det højeste var 'AP Graduate in Marketing Management' på Erhvervsakademi Sjælland med et krav på 12,3."
  example_title: "Example 3"
---

# mT5-base fine-tuned for News article Summarisation ✏️🧾

[Google's mT5](https://aclanthology.org/2021.naacl-main.41/) for the **summarisation** downstream task.

# Model summary

This repository contains a model for Danish abstractive summarisation of news articles. The summariser is based on a language-specific mT5-base, where the vocabulary is condensed to include tokens used in Danish and English.
The model is fine-tuned using an abstractive subset of the DaNewsroom dataset (Varab & Schluter, 2020), according to the binned density categories employed in Newsroom (Grusky et al., 2018).

# References

Grusky, M., Naaman, M., & Artzi, Y. (2018). Newsroom: A Dataset of 1.3 Million Summaries with Diverse Extractive Strategies. ArXiv:1804.11283 [Cs]. http://arxiv.org/abs/1804.11283

Varab, D., & Schluter, N. (2020). DaNewsroom: A Large-scale Danish Summarisation Dataset. Proceedings of the 12th Language Resources and Evaluation Conference, 6731–6739. https://aclanthology.org/2020.lrec-1.831
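A minimal inference sketch, assuming the checkpoint works with the standard seq2seq `summarization` pipeline (the input is a short excerpt from the card's second widget example; generation lengths are illustrative):

```python
from transformers import pipeline

summariser = pipeline("summarization", model="Danish-summarisation/DanSumT5-pilot")

# Short excerpt from the card's second widget example.
text = (
    "Mere end 21.000 krigsforbrydelser. Så mange efterforsker de ukrainske "
    "myndigheder lige nu ifølge den ukrainske rigsadvokat, Iryna Venediktova."
)
print(summariser(text, max_length=64, min_length=10)[0]["summary_text"])
```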
arstep/q-FrozenLake-v1-4x4-noSlippery
arstep
2023-07-12T18:12:13Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-12T18:12:10Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gym

# load_from_hub is the helper from the Hugging Face Deep RL course (see the sketch below).
model = load_from_hub(repo_id="arstep/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
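The snippet above relies on a `load_from_hub` helper that is not defined in the card. A minimal sketch of one possible implementation, matching the pickle-based helper used in the Deep RL course notebooks:

```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-table dictionary from the Hub and load it."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```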
jordyvl/vit-tiny_rvl_cdip_100_examples_per_class_kd_CEKD_t5.0_a0.9
jordyvl
2023-07-12T18:09:45Z
163
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-12T17:31:18Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-tiny_rvl_cdip_100_examples_per_class_kd_CEKD_t5.0_a0.9
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-tiny_rvl_cdip_100_examples_per_class_kd_CEKD_t5.0_a0.9

This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5750
- Accuracy: 0.5325
- Brier Loss: 0.5990
- Nll: 2.5263
- F1 Micro: 0.5325
- F1 Macro: 0.5240
- Ece: 0.1659
- Aurc: 0.2152

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 7 | 3.9285 | 0.04 | 1.0722 | 7.3572 | 0.04 | 0.0319 | 0.2792 | 0.9556 |
| No log | 2.0 | 14 | 3.0027 | 0.095 | 0.9510 | 5.8779 | 0.095 | 0.0766 | 0.1668 | 0.8900 |
| No log | 3.0 | 21 | 2.6988 | 0.225 | 0.8896 | 5.3750 | 0.225 | 0.1801 | 0.1669 | 0.6473 |
| No log | 4.0 | 28 | 2.3179 | 0.285 | 0.8016 | 3.5658 | 0.285 | 0.2657 | 0.1679 | 0.4916 |
| No log | 5.0 | 35 | 2.0566 | 0.37 | 0.7203 | 2.9834 | 0.37 | 0.3493 | 0.1684 | 0.3612 |
| No log | 6.0 | 42 | 1.9505 | 0.4325 | 0.6892 | 3.0719 | 0.4325 | 0.4127 | 0.1775 | 0.3084 |
| No log | 7.0 | 49 | 1.9995 | 0.4375 | 0.7008 | 3.1569 | 0.4375 | 0.4084 | 0.2032 | 0.3103 |
| No log | 8.0 | 56 | 1.9133 | 0.445 | 0.6906 | 2.8574 | 0.445 | 0.4464 | 0.2016 | 0.3062 |
| No log | 9.0 | 63 | 1.9876 | 0.4625 | 0.6918 | 2.9267 | 0.4625 | 0.4538 | 0.2228 | 0.2868 |
| No log | 10.0 | 70 | 2.0051 | 0.4725 | 0.6971 | 2.9249 | 0.4725 | 0.4553 | 0.2234 | 0.2814 |
| No log | 11.0 | 77 | 2.1834 | 0.465 | 0.7319 | 2.9998 | 0.465 | 0.4426 | 0.2444 | 0.3006 |
| No log | 12.0 | 84 | 1.9953 | 0.4825 | 0.7087 | 2.7128 | 0.4825 | 0.4731 | 0.2386 | 0.2980 |
| No log | 13.0 | 91 | 1.8834 | 0.4975 | 0.6771 | 2.6879 | 0.4975 | 0.4954 | 0.2240 | 0.2748 |
| No log | 14.0 | 98 | 1.9647 | 0.4675 | 0.6987 | 2.8305 | 0.4675 | 0.4429 | 0.2409 | 0.2902 |
| No log | 15.0 | 105 | 1.8810 | 0.5 | 0.6785 | 2.6402 | 0.5 | 0.4847 | 0.2171 | 0.2725 |
| No log | 16.0 | 112 | 1.8777 | 0.4875 | 0.6877 | 2.6940 | 0.4875 | 0.4871 | 0.2210 | 0.2846 |
| No log | 17.0 | 119 | 1.9260 | 0.4925 | 0.6796 | 2.7055 | 0.4925 | 0.4834 | 0.2012 | 0.2744 |
| No log | 18.0 | 126 | 1.7864 | 0.505 | 0.6547 | 2.6724 | 0.505 | 0.4912 | 0.2081 | 0.2434 |
| No log | 19.0 | 133 | 1.7618 | 0.4975 | 0.6430 | 2.5951 | 0.4975 | 0.4915 | 0.2172 | 0.2490 |
| No log | 20.0 | 140 | 1.7496 | 0.515 | 0.6513 | 2.5263 | 0.515 | 0.5025 | 0.1975 | 0.2502 |
| No log | 21.0 | 147 | 1.7082 | 0.5275 | 0.6438 | 2.4039 | 0.5275 | 0.5224 | 0.2017 | 0.2450 |
| No log | 22.0 | 154 | 1.7482 | 0.4975 | 0.6682 | 2.5194 | 0.4975 | 0.4911 | 0.2247 | 0.2571 |
| No log | 23.0 | 161 | 1.7377 | 0.5075 | 0.6482 | 2.4136 | 0.5075 | 0.4900 | 0.2221 | 0.2396 |
| No log | 24.0 | 168 | 1.7094 | 0.515 | 0.6372 | 2.5605 | 0.515 | 0.5083 | 0.2137 | 0.2474 |
| No log | 25.0 | 175 | 1.6884 | 0.5175 | 0.6422 | 2.5270 | 0.5175 | 0.5104 | 0.2111 | 0.2444 |
| No log | 26.0 | 182 | 1.6489 | 0.5275 | 0.6246 | 2.5344 | 0.5275 | 0.5211 | 0.2066 | 0.2333 |
| No log | 27.0 | 189 | 1.6165 | 0.53 | 0.6191 | 2.5418 | 0.53 | 0.5256 | 0.2021 | 0.2305 |
| No log | 28.0 | 196 | 1.6316 | 0.5275 | 0.6181 | 2.6568 | 0.5275 | 0.5212 | 0.2004 | 0.2300 |
| No log | 29.0 | 203 | 1.6595 | 0.5175 | 0.6306 | 2.4298 | 0.5175 | 0.5096 | 0.2020 | 0.2427 |
| No log | 30.0 | 210 | 1.6193 | 0.5325 | 0.6157 | 2.5455 | 0.5325 | 0.5272 | 0.1779 | 0.2278 |
| No log | 31.0 | 217 | 1.6517 | 0.5325 | 0.6274 | 2.4579 | 0.5325 | 0.5259 | 0.2006 | 0.2362 |
| No log | 32.0 | 224 | 1.6434 | 0.5325 | 0.6167 | 2.5805 | 0.5325 | 0.5229 | 0.1995 | 0.2273 |
| No log | 33.0 | 231 | 1.6660 | 0.5225 | 0.6269 | 2.6794 | 0.5225 | 0.5132 | 0.2244 | 0.2283 |
| No log | 34.0 | 238 | 1.6353 | 0.515 | 0.6194 | 2.6085 | 0.515 | 0.5069 | 0.1839 | 0.2303 |
| No log | 35.0 | 245 | 1.5920 | 0.5325 | 0.6051 | 2.5645 | 0.5325 | 0.5248 | 0.1868 | 0.2208 |
| No log | 36.0 | 252 | 1.5909 | 0.54 | 0.6028 | 2.4786 | 0.54 | 0.5323 | 0.1902 | 0.2194 |
| No log | 37.0 | 259 | 1.5730 | 0.5425 | 0.5983 | 2.4877 | 0.5425 | 0.5368 | 0.1799 | 0.2177 |
| No log | 38.0 | 266 | 1.5800 | 0.535 | 0.6029 | 2.4736 | 0.535 | 0.5282 | 0.1761 | 0.2196 |
| No log | 39.0 | 273 | 1.5594 | 0.54 | 0.5955 | 2.5093 | 0.54 | 0.5327 | 0.1900 | 0.2126 |
| No log | 40.0 | 280 | 1.5685 | 0.53 | 0.5979 | 2.6068 | 0.53 | 0.5208 | 0.1893 | 0.2173 |
| No log | 41.0 | 287 | 1.5757 | 0.53 | 0.5995 | 2.5655 | 0.53 | 0.5218 | 0.1862 | 0.2164 |
| No log | 42.0 | 294 | 1.5797 | 0.535 | 0.6039 | 2.5445 | 0.535 | 0.5273 | 0.1834 | 0.2182 |
| No log | 43.0 | 301 | 1.5900 | 0.53 | 0.6074 | 2.5201 | 0.53 | 0.5189 | 0.1747 | 0.2206 |
| No log | 44.0 | 308 | 1.5760 | 0.5325 | 0.5986 | 2.4974 | 0.5325 | 0.5225 | 0.1870 | 0.2148 |
| No log | 45.0 | 315 | 1.5768 | 0.53 | 0.6013 | 2.5174 | 0.53 | 0.5204 | 0.1979 | 0.2158 |
| No log | 46.0 | 322 | 1.5774 | 0.53 | 0.6011 | 2.5199 | 0.53 | 0.5206 | 0.1882 | 0.2165 |
| No log | 47.0 | 329 | 1.5714 | 0.54 | 0.5983 | 2.5329 | 0.54 | 0.5303 | 0.1884 | 0.2135 |
| No log | 48.0 | 336 | 1.5834 | 0.5325 | 0.6026 | 2.5253 | 0.5325 | 0.5238 | 0.1658 | 0.2190 |
| No log | 49.0 | 343 | 1.5724 | 0.5375 | 0.5979 | 2.5569 | 0.5375 | 0.5299 | 0.1617 | 0.2151 |
| No log | 50.0 | 350 | 1.5685 | 0.5375 | 0.5985 | 2.5189 | 0.5375 | 0.5285 | 0.1919 | 0.2151 |
| No log | 51.0 | 357 | 1.5708 | 0.54 | 0.5986 | 2.5002 | 0.54 | 0.5305 | 0.1755 | 0.2149 |
| No log | 52.0 | 364 | 1.5665 | 0.535 | 0.5977 | 2.5224 | 0.535 | 0.5267 | 0.1842 | 0.2160 |
| No log | 53.0 | 371 | 1.5713 | 0.5325 | 0.5993 | 2.5515 | 0.5325 | 0.5250 | 0.1753 | 0.2160 |
| No log | 54.0 | 378 | 1.5693 | 0.535 | 0.5986 | 2.5516 | 0.535 | 0.5276 | 0.1841 | 0.2158 |
| No log | 55.0 | 385 | 1.5693 | 0.5375 | 0.5984 | 2.5190 | 0.5375 | 0.5285 | 0.1842 | 0.2144 |
| No log | 56.0 | 392 | 1.5725 | 0.535 | 0.5992 | 2.5527 | 0.535 | 0.5262 | 0.1776 | 0.2150 |
| No log | 57.0 | 399 | 1.5674 | 0.5425 | 0.5976 | 2.5502 | 0.5425 | 0.5326 | 0.1902 | 0.2137 |
| No log | 58.0 | 406 | 1.5675 | 0.5375 | 0.5974 | 2.5517 | 0.5375 | 0.5288 | 0.1794 | 0.2139 |
| No log | 59.0 | 413 | 1.5713 | 0.535 | 0.5988 | 2.5515 | 0.535 | 0.5257 | 0.1791 | 0.2147 |
| No log | 60.0 | 420 | 1.5729 | 0.535 | 0.5988 | 2.5512 | 0.535 | 0.5262 | 0.1796 | 0.2148 |
| No log | 61.0 | 427 | 1.5702 | 0.5375 | 0.5976 | 2.5521 | 0.5375 | 0.5281 | 0.1817 | 0.2139 |
| No log | 62.0 | 434 | 1.5728 | 0.535 | 0.5988 | 2.5514 | 0.535 | 0.5266 | 0.1722 | 0.2149 |
| No log | 63.0 | 441 | 1.5720 | 0.5325 | 0.5985 | 2.5206 | 0.5325 | 0.5231 | 0.1790 | 0.2149 |
| No log | 64.0 | 448 | 1.5704 | 0.5325 | 0.5975 | 2.5510 | 0.5325 | 0.5236 | 0.1706 | 0.2139 |
| No log | 65.0 | 455 | 1.5724 | 0.5325 | 0.5986 | 2.5225 | 0.5325 | 0.5236 | 0.1557 | 0.2148 |
| No log | 66.0 | 462 | 1.5718 | 0.5325 | 0.5985 | 2.5246 | 0.5325 | 0.5241 | 0.1772 | 0.2148 |
| No log | 67.0 | 469 | 1.5710 | 0.5325 | 0.5981 | 2.5511 | 0.5325 | 0.5237 | 0.1625 | 0.2146 |
| No log | 68.0 | 476 | 1.5716 | 0.54 | 0.5981 | 2.5001 | 0.54 | 0.5304 | 0.1622 | 0.2141 |
| No log | 69.0 | 483 | 1.5732 | 0.5325 | 0.5988 | 2.5517 | 0.5325 | 0.5232 | 0.1641 | 0.2150 |
| No log | 70.0 | 490 | 1.5733 | 0.5325 | 0.5987 | 2.5522 | 0.5325 | 0.5237 | 0.1715 | 0.2149 |
| No log | 71.0 | 497 | 1.5729 | 0.5325 | 0.5985 | 2.5523 | 0.5325 | 0.5241 | 0.1670 | 0.2147 |
| 0.3153 | 72.0 | 504 | 1.5730 | 0.5325 | 0.5987 | 2.5236 | 0.5325 | 0.5237 | 0.1656 | 0.2149 |
| 0.3153 | 73.0 | 511 | 1.5723 | 0.5325 | 0.5985 | 2.5212 | 0.5325 | 0.5238 | 0.1893 | 0.2145 |
| 0.3153 | 74.0 | 518 | 1.5738 | 0.5325 | 0.5989 | 2.5515 | 0.5325 | 0.5238 | 0.1744 | 0.2147 |
| 0.3153 | 75.0 | 525 | 1.5740 | 0.5325 | 0.5988 | 2.5318 | 0.5325 | 0.5237 | 0.1683 | 0.2150 |
| 0.3153 | 76.0 | 532 | 1.5734 | 0.535 | 0.5985 | 2.5525 | 0.535 | 0.5261 | 0.1763 | 0.2145 |
| 0.3153 | 77.0 | 539 | 1.5740 | 0.5325 | 0.5989 | 2.5516 | 0.5325 | 0.5243 | 0.1726 | 0.2149 |
| 0.3153 | 78.0 | 546 | 1.5738 | 0.5325 | 0.5987 | 2.5289 | 0.5325 | 0.5241 | 0.1692 | 0.2148 |
| 0.3153 | 79.0 | 553 | 1.5736 | 0.5325 | 0.5987 | 2.5255 | 0.5325 | 0.5242 | 0.1807 | 0.2147 |
| 0.3153 | 80.0 | 560 | 1.5739 | 0.5325 | 0.5988 | 2.5522 | 0.5325 | 0.5237 | 0.1769 | 0.2150 |
| 0.3153 | 81.0 | 567 | 1.5743 | 0.5325 | 0.5989 | 2.5519 | 0.5325 | 0.5238 | 0.1837 | 0.2151 |
| 0.3153 | 82.0 | 574 | 1.5742 | 0.5325 | 0.5989 | 2.5232 | 0.5325 | 0.5240 | 0.1712 | 0.2149 |
| 0.3153 | 83.0 | 581 | 1.5744 | 0.5325 | 0.5989 | 2.5256 | 0.5325 | 0.5239 | 0.1803 | 0.2151 |
| 0.3153 | 84.0 | 588 | 1.5741 | 0.5325 | 0.5988 | 2.5233 | 0.5325 | 0.5233 | 0.1655 | 0.2147 |
| 0.3153 | 85.0 | 595 | 1.5747 | 0.5325 | 0.5990 | 2.5274 | 0.5325 | 0.5237 | 0.1696 | 0.2152 |
| 0.3153 | 86.0 | 602 | 1.5747 | 0.5325 | 0.5989 | 2.5263 | 0.5325 | 0.5238 | 0.1689 | 0.2150 |
| 0.3153 | 87.0 | 609 | 1.5745 | 0.5325 | 0.5989 | 2.5251 | 0.5325 | 0.5237 | 0.1654 | 0.2149 |
| 0.3153 | 88.0 | 616 | 1.5747 | 0.5325 | 0.5989 | 2.5283 | 0.5325 | 0.5241 | 0.1693 | 0.2151 |
| 0.3153 | 89.0 | 623 | 1.5748 | 0.5325 | 0.5990 | 2.5275 | 0.5325 | 0.5239 | 0.1596 | 0.2152 |
| 0.3153 | 90.0 | 630 | 1.5749 | 0.5325 | 0.5990 | 2.5278 | 0.5325 | 0.5240 | 0.1602 | 0.2151 |
| 0.3153 | 91.0 | 637 | 1.5750 | 0.5325 | 0.5990 | 2.5337 | 0.5325 | 0.5239 | 0.1623 | 0.2152 |
| 0.3153 | 92.0 | 644 | 1.5749 | 0.5325 | 0.5990 | 2.5272 | 0.5325 | 0.5238 | 0.1653 | 0.2151 |
| 0.3153 | 93.0 | 651 | 1.5751 | 0.5325 | 0.5990 | 2.5281 | 0.5325 | 0.5240 | 0.1663 | 0.2149 |
| 0.3153 | 94.0 | 658 | 1.5750 | 0.5325 | 0.5990 | 2.5249 | 0.5325 | 0.5239 | 0.1715 | 0.2152 |
| 0.3153 | 95.0 | 665 | 1.5749 | 0.535 | 0.5990 | 2.5257 | 0.535 | 0.5263 | 0.1625 | 0.2149 |
| 0.3153 | 96.0 | 672 | 1.5750 | 0.5325 | 0.5990 | 2.5266 | 0.5325 | 0.5239 | 0.1655 | 0.2151 |
| 0.3153 | 97.0 | 679 | 1.5750 | 0.5325 | 0.5990 | 2.5268 | 0.5325 | 0.5239 | 0.1686 | 0.2152 |
| 0.3153 | 98.0 | 686 | 1.5750 | 0.5325 | 0.5990 | 2.5275 | 0.5325 | 0.5240 | 0.1664 | 0.2152 |
| 0.3153 | 99.0 | 693 | 1.5750 | 0.5325 | 0.5990 | 2.5269 | 0.5325 | 0.5240 | 0.1678 | 0.2152 |
| 0.3153 | 100.0 | 700 | 1.5750 | 0.5325 | 0.5990 | 2.5263 | 0.5325 | 0.5240 | 0.1659 | 0.2152 |

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
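A minimal inference sketch, assuming the checkpoint works with the standard `image-classification` pipeline (the image path is illustrative; the class names come from `model.config.id2label`):

```python
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="jordyvl/vit-tiny_rvl_cdip_100_examples_per_class_kd_CEKD_t5.0_a0.9",
)

# Illustrative input: any document image (scan, letter, invoice, ...).
print(clf("document.png", top_k=3))
```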
DipanAI/falcon_law_7Ba
DipanAI
2023-07-12T18:01:26Z
0
0
null
[ "tensorboard", "generated_from_trainer", "text-generation", "region:us" ]
text-generation
2023-07-12T16:13:38Z
---
tags:
- generated_from_trainer
model-index:
- name: falcon_law_7Ba
  results: []
pipeline_tag: text-generation
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# falcon_law_7Ba

This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 100

### Training results

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
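The card does not show how to load the checkpoint. A minimal sketch, under the assumption that the repository stores PEFT/LoRA adapter weights on top of the named base model (inferred from the training setup; adjust if the repo holds full weights, and note the prompt below is illustrative):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "ybelkada/falcon-7b-sharded-bf16"
adapter = "DipanAI/falcon_law_7Ba"  # assumed to contain PEFT adapter files

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(model, adapter)

prompt = "Question: What is consideration in contract law?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```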
tyavika/lr1e5-layer1-bs16-Distil-CNN128LSTM128NoBi
tyavika
2023-07-12T17:59:27Z
77
0
transformers
[ "transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-12T15:42:47Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: lr1e5-layer1-bs16-Distil-CNN128LSTM128NoBi
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# lr1e5-layer1-bs16-Distil-CNN128LSTM128NoBi

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3813

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:---:|:---:|:---:|:---:|
| 1.5317 | 1.0 | 3290 | 1.3385 |
| 1.0853 | 2.0 | 6580 | 1.1885 |
| 0.7993 | 3.0 | 9870 | 1.2330 |
| 0.5808 | 4.0 | 13160 | 1.3813 |

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
Jihyeon-2/lora-trained-xl_lhand
Jihyeon-2
2023-07-12T17:57:08Z
5
1
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-0.9", "base_model:adapter:stabilityai/stable-diffusion-xl-base-0.9", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-07-12T16:34:56Z
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-xl-base-0.9
instance_prompt: a photo of sks hand
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---

# LoRA DreamBooth - Jihyeon-2/lora-trained-xl_lhand

These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-0.9. The weights were trained on the instance prompt "a photo of sks hand" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)

LoRA for the text encoder was enabled: False.
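A minimal sketch for applying the adapter with diffusers, assuming the weights are in the standard diffusers LoRA layout. Note that the base model, stable-diffusion-xl-base-0.9, is gated and requires accepting its license; the prompt reuses the instance token from the card:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-0.9", torch_dtype=torch.float16
).to("cuda")

# Load the DreamBooth LoRA adapter on top of the base model.
pipe.load_lora_weights("Jihyeon-2/lora-trained-xl_lhand")

image = pipe("a photo of sks hand holding a pencil").images[0]
image.save("sks_hand.png")
```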
ayanban011/vit-base_tobacco_bs_16_lr_2e-4_e_200_wr_0.01_wd_0.2
ayanban011
2023-07-12T17:43:52Z
165
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-12T15:30:14Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base_tobacco_bs_16_lr_2e-4_e_200_wr_0.01_wd_0.2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base_tobacco_bs_16_lr_2e-4_e_200_wr_0.01_wd_0.2

This model is a fine-tuned version of [jordyvl/vit-base_tobacco](https://huggingface.co/jordyvl/vit-base_tobacco) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0644
- Accuracy: 0.86
- Brier Loss: 0.2705
- Nll: 1.3085
- F1 Micro: 0.8600
- F1 Macro: 0.8552
- Ece: 0.1378
- Aurc: 0.0461

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 200

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 0.96 | 12 | 0.7869 | 0.78 | 0.3223 | 1.3413 | 0.78 | 0.7402 | 0.2269 | 0.0782 |
| No log | 2.0 | 25 | 0.9889 | 0.715 | 0.4300 | 1.9648 | 0.715 | 0.6881 | 0.2669 | 0.1397 |
| No log | 2.96 | 37 | 0.7053 | 0.82 | 0.2995 | 1.2578 | 0.82 | 0.8270 | 0.2253 | 0.0758 |
| No log | 4.0 | 50 | 0.7535 | 0.78 | 0.3225 | 1.3427 | 0.78 | 0.7395 | 0.2159 | 0.0616 |
| No log | 4.96 | 62 | 0.8538 | 0.775 | 0.3634 | 1.5684 | 0.775 | 0.7523 | 0.2181 | 0.1149 |
| No log | 6.0 | 75 | 0.7825 | 0.77 | 0.3557 | 1.3406 | 0.7700 | 0.7663 | 0.2136 | 0.0625 |
| No log | 6.96 | 87 | 1.0777 | 0.67 | 0.4896 | 1.5465 | 0.67 | 0.6728 | 0.2540 | 0.1106 |
| No log | 8.0 | 100 | 1.1030 | 0.73 | 0.4453 | 2.5744 | 0.7300 | 0.6939 | 0.2294 | 0.1423 |
| No log | 8.96 | 112 | 1.0215 | 0.725 | 0.4339 | 2.0485 | 0.7250 | 0.7012 | 0.2348 | 0.1278 |
| No log | 10.0 | 125 | 0.7940 | 0.795 | 0.3378 | 1.3057 | 0.795 | 0.7911 | 0.1828 | 0.0750 |
| No log | 10.96 | 137 | 0.7648 | 0.82 | 0.2963 | 1.3907 | 0.82 | 0.8022 | 0.1597 | 0.0744 |
| No log | 12.0 | 150 | 1.0755 | 0.74 | 0.4383 | 2.1271 | 0.74 | 0.7182 | 0.2281 | 0.0847 |
| No log | 12.96 | 162 | 1.0091 | 0.775 | 0.3856 | 1.7383 | 0.775 | 0.7339 | 0.1969 | 0.1029 |
| No log | 14.0 | 175 | 1.0531 | 0.77 | 0.4027 | 1.5532 | 0.7700 | 0.7592 | 0.2152 | 0.0888 |
| No log | 14.96 | 187 | 1.0221 | 0.77 | 0.4027 | 1.5199 | 0.7700 | 0.7259 | 0.2059 | 0.1031 |
| No log | 16.0 | 200 | 1.1795 | 0.735 | 0.4435 | 2.0739 | 0.735 | 0.7063 | 0.2262 | 0.1305 |
| No log | 16.96 | 212 | 1.1560 | 0.745 | 0.4379 | 2.0155 | 0.745 | 0.7240 | 0.2207 | 0.1273 |
| No log | 18.0 | 225 | 1.0635 | 0.76 | 0.4159 | 1.5491 | 0.76 | 0.7508 | 0.2124 | 0.0879 |
| No log | 18.96 | 237 | 1.2639 | 0.73 | 0.4649 | 1.9828 | 0.7300 | 0.7276 | 0.2298 | 0.1079 |
| No log | 20.0 | 250 | 1.0598 | 0.78 | 0.3866 | 1.5139 | 0.78 | 0.7676 | 0.1885 | 0.0914 |
| No log | 20.96 | 262 | 0.8900 | 0.81 | 0.3241 | 1.8355 | 0.81 | 0.7925 | 0.1691 | 0.0648 |
| No log | 22.0 | 275 | 1.0617 | 0.79 | 0.3788 | 1.8951 | 0.79 | 0.7783 | 0.1893 | 0.0676 |
| No log | 22.96 | 287 | 1.0362 | 0.785 | 0.3646 | 1.9399 | 0.785 | 0.7653 | 0.1914 | 0.0816 |
| No log | 24.0 | 300 | 1.1701 | 0.775 | 0.4060 | 2.1593 | 0.775 | 0.7718 | 0.2114 | 0.0842 |
| No log | 24.96 | 312 | 1.0841 | 0.79 | 0.3799 | 1.8773 | 0.79 | 0.7795 | 0.2016 | 0.0775 |
| No log | 26.0 | 325 | 1.0064 | 0.785 | 0.3650 | 1.7371 | 0.785 | 0.7674 | 0.1915 | 0.0813 |
| No log | 26.96 | 337 | 0.8886 | 0.825 | 0.3114 | 1.4858 | 0.825 | 0.8116 | 0.1609 | 0.0636 |
| No log | 28.0 | 350 | 1.1174 | 0.8 | 0.3751 | 1.9584 | 0.8000 | 0.7869 | 0.1928 | 0.0930 |
| No log | 28.96 | 362 | 1.0922 | 0.8 | 0.3672 | 1.8702 | 0.8000 | 0.7673 | 0.1954 | 0.0771 |
| No log | 30.0 | 375 | 1.0281 | 0.805 | 0.3506 | 1.6105 | 0.805 | 0.7809 | 0.1773 | 0.0936 |
| No log | 30.96 | 387 | 0.9041 | 0.82 | 0.3210 | 1.3323 | 0.82 | 0.8148 | 0.1627 | 0.0651 |
| No log | 32.0 | 400 | 1.1018 | 0.79 | 0.3804 | 1.9928 | 0.79 | 0.7962 | 0.1859 | 0.0574 |
| No log | 32.96 | 412 | 1.1973 | 0.765 | 0.4156 | 1.4304 | 0.765 | 0.7682 | 0.2147 | 0.0760 |
| No log | 34.0 | 425 | 1.0216 | 0.805 | 0.3605 | 1.4476 | 0.805 | 0.7864 | 0.1830 | 0.0633 |
| No log | 34.96 | 437 | 1.2356 | 0.755 | 0.4237 | 1.9897 | 0.755 | 0.7350 | 0.2214 | 0.0890 |
| No log | 36.0 | 450 | 1.0881 | 0.8 | 0.3757 | 1.3848 | 0.8000 | 0.7810 | 0.1960 | 0.0703 |
| No log | 36.96 | 462 | 1.1133 | 0.795 | 0.3687 | 2.0286 | 0.795 | 0.7707 | 0.1790 | 0.0652 |
| No log | 38.0 | 475 | 1.1243 | 0.78 | 0.3839 | 1.5683 | 0.78 | 0.7699 | 0.1905 | 0.0704 |
| No log | 38.96 | 487 | 1.1351 | 0.785 | 0.3983 | 1.4970 | 0.785 | 0.7647 | 0.1969 | 0.0666 |
| 0.0934 | 40.0 | 500 | 1.2551 | 0.775 | 0.4089 | 2.0438 | 0.775 | 0.7688 | 0.2082 | 0.1007 |
| 0.0934 | 40.96 | 512 | 1.1739 | 0.775 | 0.4003 | 1.3286 | 0.775 | 0.7654 | 0.2056 | 0.0819 |
| 0.0934 | 42.0 | 525 | 1.0007 | 0.83 | 0.3207 | 1.2576 | 0.83 | 0.8345 | 0.1579 | 0.0677 |
| 0.0934 | 42.96 | 537 | 1.0509 | 0.805 | 0.3580 | 1.2330 | 0.805 | 0.7933 | 0.1884 | 0.0716 |
| 0.0934 | 44.0 | 550 | 1.0830 | 0.805 | 0.3537 | 1.7652 | 0.805 | 0.7871 | 0.1740 | 0.0688 |
| 0.0934 | 44.96 | 562 | 0.8544 | 0.83 | 0.2957 | 1.4716 | 0.83 | 0.8039 | 0.1560 | 0.0532 |
| 0.0934 | 46.0 | 575 | 1.0803 | 0.815 | 0.3549 | 1.5802 | 0.815 | 0.7951 | 0.1840 | 0.0691 |
| 0.0934 | 46.96 | 587 | 0.9441 | 0.815 | 0.3318 | 1.2883 | 0.815 | 0.7924 | 0.1709 | 0.0514 |
| 0.0934 | 48.0 | 600 | 0.9007 | 0.845 | 0.2765 | 1.3443 | 0.845 | 0.8353 | 0.1402 | 0.0539 |
| 0.0934 | 48.96 | 612 | 0.9601 | 0.84 | 0.2952 | 1.4755 | 0.8400 | 0.8306 | 0.1499 | 0.0565 |
| 0.0934 | 50.0 | 625 | 0.9801 | 0.84 | 0.2992 | 1.4646 | 0.8400 | 0.8306 | 0.1529 | 0.0559 |
| 0.0934 | 50.96 | 637 | 0.9747 | 0.845 | 0.2950 | 1.4544 | 0.845 | 0.8338 | 0.1526 | 0.0546 |
| 0.0934 | 52.0 | 650 | 0.9651 | 0.845 | 0.2895 | 1.4442 | 0.845 | 0.8338 | 0.1469 | 0.0537 |
| 0.0934 | 52.96 | 662 | 0.9583 | 0.85 | 0.2848 | 1.4367 | 0.85 | 0.8370 | 0.1465 | 0.0525 |
| 0.0934 | 54.0 | 675 | 0.9534 | 0.85 | 0.2805 | 1.4300 | 0.85 | 0.8370 | 0.1455 | 0.0514 |
| 0.0934 | 54.96 | 687 | 0.9503 | 0.855 | 0.2776 | 1.4252 | 0.855 | 0.8425 | 0.1408 | 0.0510 |
| 0.0934 | 56.0 | 700 | 0.9480 | 0.855 | 0.2754 | 1.4207 | 0.855 | 0.8425 | 0.1407 | 0.0506 |
| 0.0934 | 56.96 | 712 | 0.9471 | 0.855 | 0.2739 | 1.4175 | 0.855 | 0.8425 | 0.1442 | 0.0504 |
| 0.0934 | 58.0 | 725 | 0.9471 | 0.855 | 0.2729 | 1.4147 | 0.855 | 0.8442 | 0.1435 | 0.0501 |
| 0.0934 | 58.96 | 737 | 0.9474 | 0.855 | 0.2720 | 1.4125 | 0.855 | 0.8442 | 0.1432 | 0.0497 |
| 0.0934 | 60.0 | 750 | 0.9482 | 0.855 | 0.2713 | 1.4101 | 0.855 | 0.8442 | 0.1420 | 0.0497 |
| 0.0934 | 60.96 | 762 | 0.9490 | 0.855 | 0.2708 | 1.4082 | 0.855 | 0.8442 | 0.1421 | 0.0493 |
| 0.0934 | 62.0 | 775 | 0.9500 | 0.86 | 0.2703 | 1.4063 | 0.8600 | 0.8534 | 0.1411 | 0.0493 |
| 0.0934 | 62.96 | 787 | 0.9512 | 0.86 | 0.2702 | 1.4046 | 0.8600 | 0.8534 | 0.1410 | 0.0492 |
| 0.0934 | 64.0 | 800 | 0.9528 | 0.86 | 0.2699 | 1.4032 | 0.8600 | 0.8534 | 0.1408 | 0.0489 |
| 0.0934 | 64.96 | 812 | 0.9541 | 0.86 | 0.2697 | 1.3472 | 0.8600 | 0.8534 | 0.1349 | 0.0487 |
| 0.0934 | 66.0 | 825 | 0.9558 | 0.86 | 0.2696 | 1.3431 | 0.8600 | 0.8534 | 0.1408 | 0.0487 |
| 0.0934 | 66.96 | 837 | 0.9574 | 0.86 | 0.2697 | 1.3403 | 0.8600 | 0.8534 | 0.1405 | 0.0486 |
| 0.0934 | 68.0 | 850 | 0.9591 | 0.86 | 0.2698 | 1.3375 | 0.8600 | 0.8534 | 0.1402 | 0.0486 |
| 0.0934 | 68.96 | 862 | 0.9605 | 0.86 | 0.2698 | 1.3355 | 0.8600 | 0.8552 | 0.1394 | 0.0486 |
| 0.0934 | 70.0 | 875 | 0.9624 | 0.86 | 0.2698 | 1.3338 | 0.8600 | 0.8552 | 0.1394 | 0.0486 |
| 0.0934 | 70.96 | 887 | 0.9638 | 0.86 | 0.2700 | 1.3322 | 0.8600 | 0.8552 | 0.1397 | 0.0485 |
| 0.0934 | 72.0 | 900 | 0.9657 | 0.86 | 0.2701 | 1.3310 | 0.8600 | 0.8552 | 0.1397 | 0.0485 |
| 0.0934 | 72.96 | 912 | 0.9673 | 0.86 | 0.2702 | 1.3299 | 0.8600 | 0.8552 | 0.1397 | 0.0484 |
| 0.0934 | 74.0 | 925 | 0.9691 | 0.86 | 0.2703 | 1.3289 | 0.8600 | 0.8552 | 0.1397 | 0.0484 |
| 0.0934 | 74.96 | 937 | 0.9708 | 0.86 | 0.2704 | 1.3280 | 0.8600 | 0.8552 | 0.1398 | 0.0485 |
| 0.0934 | 76.0 | 950 | 0.9725 | 0.86 | 0.2706 | 1.3271 | 0.8600 | 0.8552 | 0.1398 | 0.0485 |
| 0.0934 | 76.96 | 962 | 0.9740 | 0.86 | 0.2707 | 1.3263 | 0.8600 | 0.8552 | 0.1398 | 0.0485 |
| 0.0934 | 78.0 | 975 | 0.9757 | 0.86 | 0.2707 | 1.3256 | 0.8600 | 0.8552 | 0.1383 | 0.0485 |
| 0.0934 | 78.96 | 987 | 0.9772 | 0.86 | 0.2708 | 1.3248 | 0.8600 | 0.8552 | 0.1357 | 0.0484 |
| 0.0038 | 80.0 | 1000 | 0.9789 | 0.86 | 0.2709 | 1.3243 | 0.8600 | 0.8552 | 0.1359 | 0.0485 |
| 0.0038 | 80.96 | 1012 | 0.9806 | 0.86 | 0.2710 | 1.3238 | 0.8600 | 0.8552 | 0.1360 | 0.0484 |
| 0.0038 | 82.0 | 1025 | 0.9820 | 0.86 | 0.2711 | 1.3232 | 0.8600 | 0.8552 | 0.1361 | 0.0482 |
| 0.0038 | 82.96 | 1037 | 0.9837 | 0.86 | 0.2712 | 1.3227 | 0.8600 | 0.8552 | 0.1361 | 0.0481 |
| 0.0038 | 84.0 | 1050 | 0.9853 | 0.86 | 0.2713 | 1.3222 | 0.8600 | 0.8552 | 0.1362 | 0.0480 |
| 0.0038 | 84.96 | 1062 | 0.9867 | 0.86 | 0.2713 | 1.3216 | 0.8600 | 0.8552 | 0.1363 | 0.0481 |
| 0.0038 | 86.0 | 1075 | 0.9883 | 0.86 | 0.2714 | 1.3212 | 0.8600 | 0.8552 | 0.1364 | 0.0479 |
| 0.0038 | 86.96 | 1087 | 0.9896 | 0.86 | 0.2714 | 1.3208 | 0.8600 | 0.8552 | 0.1365 | 0.0477 |
| 0.0038 | 88.0 | 1100 | 0.9911 | 0.86 | 0.2715 | 1.3203 | 0.8600 | 0.8552 | 0.1366 | 0.0478 |
| 0.0038 | 88.96 | 1112 | 0.9925 | 0.86 | 0.2715 | 1.3200 | 0.8600 | 0.8552 | 0.1369 | 0.0478 |
| 0.0038 | 90.0 | 1125 | 0.9940 | 0.86 | 0.2715 | 1.3196 | 0.8600 | 0.8552 | 0.1369 | 0.0477 |
| 0.0038 | 90.96 | 1137 | 0.9954 | 0.86 | 0.2715 | 1.3194 | 0.8600 | 0.8552 | 0.1369 | 0.0476 |
| 0.0038 | 92.0 | 1150 | 0.9968 | 0.86 | 0.2716 | 1.3190 | 0.8600 | 0.8552 | 0.1368 | 0.0476 |
| 0.0038 | 92.96 | 1162 | 0.9983 | 0.86 | 0.2716 | 1.3187 | 0.8600 | 0.8552 | 0.1368 | 0.0476 |
| 0.0038 | 94.0 | 1175 | 0.9996 | 0.86 | 0.2716 | 1.3184 | 0.8600 | 0.8552 | 0.1394 | 0.0476 |
| 0.0038 | 94.96 | 1187 | 1.0009 | 0.86 | 0.2716 | 1.3182 | 0.8600 | 0.8552 | 0.1393 | 0.0475 |
| 0.0038 | 96.0 | 1200 | 1.0023 | 0.86 | 0.2717 | 1.3179 | 0.8600 | 0.8552 | 0.1392 | 0.0475 |
| 0.0038 | 96.96 | 1212 | 1.0035 | 0.86 | 0.2717 | 1.3176 | 0.8600 | 0.8552 | 0.1391 | 0.0475 |
| 0.0038 | 98.0 | 1225 | 1.0049 | 0.86 | 0.2717 | 1.3175 | 0.8600 | 0.8552 | 0.1391 | 0.0474 |
| 0.0038 | 98.96 | 1237 | 1.0062 | 0.86 | 0.2717 | 1.3172 | 0.8600 | 0.8552 | 0.1391 | 0.0475 |
| 0.0038 | 100.0 | 1250 | 1.0075 | 0.86 | 0.2717 | 1.3169 | 0.8600 | 0.8552 | 0.1367 | 0.0475 |
| 0.0038 | 100.96 | 1262 | 1.0087 | 0.86 | 0.2717 | 1.3167 | 0.8600 | 0.8552 | 0.1368 | 0.0475 |
| 0.0038 | 102.0 | 1275 | 1.0099 | 0.86 | 0.2717 | 1.3164 | 0.8600 | 0.8552 | 0.1375 | 0.0474 |
| 0.0038 | 102.96 | 1287 | 1.0111 | 0.86 | 0.2717 | 1.3162 | 0.8600 | 0.8552 | 0.1376 | 0.0473 |
| 0.0038 | 104.0 | 1300 | 1.0122 | 0.86 | 0.2717 | 1.3159 | 0.8600 | 0.8552 | 0.1378 | 0.0471 |
| 0.0038 | 104.96 | 1312 | 1.0134 | 0.86 | 0.2716 | 1.3158 | 0.8600 | 0.8552 | 0.1378 | 0.0473 |
| 0.0038 | 106.0 | 1325 | 1.0146 | 0.86 | 0.2717 | 1.3155 | 0.8600 | 0.8552 | 0.1379 | 0.0472 |
| 0.0038 | 106.96 | 1337 | 1.0158 | 0.86 | 0.2717 | 1.3153 | 0.8600 | 0.8552 | 0.1379 | 0.0471 |
| 0.0038 | 108.0 | 1350 | 1.0169 | 0.86 | 0.2716 | 1.3151 | 0.8600 | 0.8552 | 0.1380 | 0.0471 |
| 0.0038 | 108.96 | 1362 | 1.0180 | 0.86 | 0.2716 | 1.3149 | 0.8600 | 0.8552 | 0.1381 | 0.0471 |
| 0.0038 | 110.0 | 1375 | 1.0191 | 0.86 | 0.2716 | 1.3146 | 0.8600 | 0.8552 | 0.1381 | 0.0471 |
| 0.0038 | 110.96 | 1387 | 1.0201 | 0.86 | 0.2716 | 1.3144 | 0.8600 | 0.8552 | 0.1382 | 0.0471 |
| 0.0038 | 112.0 | 1400 | 1.0211 | 0.86 | 0.2716 | 1.3142 | 0.8600 | 0.8552 | 0.1382 | 0.0470 |
| 0.0038 | 112.96 | 1412 | 1.0222 | 0.86 | 0.2716 | 1.3141 | 0.8600 | 0.8552 | 0.1382 | 0.0471 |
| 0.0038 | 114.0 | 1425 | 1.0233 | 0.86 | 0.2715 | 1.3139 | 0.8600 | 0.8552 | 0.1383 | 0.0470 |
| 0.0038 | 114.96 | 1437 | 1.0242 | 0.86 | 0.2715 | 1.3138 | 0.8600 | 0.8552 | 0.1383 | 0.0470 |
| 0.0038 | 116.0 | 1450 | 1.0253 | 0.86 | 0.2715 | 1.3136 | 0.8600 | 0.8552 | 0.1383 | 0.0469 |
| 0.0038 | 116.96 | 1462 | 1.0263 | 0.86 | 0.2715 | 1.3134 | 0.8600 | 0.8552 | 0.1383 | 0.0470 |
| 0.0038 | 118.0 | 1475 | 1.0273 | 0.86 | 0.2715 | 1.3133 | 0.8600 | 0.8552 | 0.1384 | 0.0470 |
| 0.0038 | 118.96 | 1487 | 1.0282 | 0.86 | 0.2714 | 1.3131 | 0.8600 | 0.8552 | 0.1384 | 0.0468 |
| 0.0006 | 120.0 | 1500 | 1.0292 | 0.86 | 0.2714 | 1.3130 | 0.8600 | 0.8552 | 0.1385 | 0.0468 |
| 0.0006 | 120.96 | 1512 | 1.0301 | 0.86 | 0.2714 | 1.3128 | 0.8600 | 0.8552 | 0.1385 | 0.0468 |
| 0.0006 | 122.0 | 1525 | 1.0311 | 0.86 | 0.2714 | 1.3127 | 0.8600 | 0.8552 | 0.1386 | 0.0467 |
| 0.0006 | 122.96 | 1537 | 1.0319 | 0.86 | 0.2714 | 1.3126 | 0.8600 | 0.8552 | 0.1386 | 0.0467 |
| 0.0006 | 124.0 | 1550 | 1.0329 | 0.86 | 0.2714 | 1.3124 | 0.8600 | 0.8552 | 0.1387 | 0.0467 |
| 0.0006 | 124.96 | 1562 | 1.0337 | 0.86 | 0.2713 | 1.3123 | 0.8600 | 0.8552 | 0.1393 | 0.0467 |
| 0.0006 | 126.0 | 1575 | 1.0346 | 0.86 | 0.2713 | 1.3122 | 0.8600 | 0.8552 | 0.1374 | 0.0466 |
| 0.0006 | 126.96 | 1587 | 1.0354 | 0.86 | 0.2713 | 1.3120 | 0.8600 | 0.8552 | 0.1375 | 0.0466 |
| 0.0006 | 128.0 | 1600 | 1.0363 | 0.86 | 0.2713 | 1.3119 | 0.8600 | 0.8552 | 0.1375 | 0.0467 |
| 0.0006 | 128.96 | 1612 | 1.0372 | 0.86 | 0.2713 | 1.3118 | 0.8600 | 0.8552 | 0.1375 | 0.0466 |
| 0.0006 | 130.0 | 1625 | 1.0380 | 0.86 | 0.2712 | 1.3117 | 0.8600 | 0.8552 | 0.1375 | 0.0466 |
| 0.0006 | 130.96 | 1637 | 1.0388 | 0.86 | 0.2712 | 1.3116 | 0.8600 | 0.8552 | 0.1375 | 0.0467 |
| 0.0006 | 132.0 | 1650 | 1.0396 | 0.86 | 0.2712 | 1.3115 | 0.8600 | 0.8552 | 0.1375 | 0.0465 |
| 0.0006 | 132.96 | 1662 | 1.0403 | 0.86 | 0.2712 | 1.3113 | 0.8600 | 0.8552 | 0.1375 | 0.0466 |
| 0.0006 | 134.0 | 1675 | 1.0411 | 0.86 | 0.2712 | 1.3113 | 0.8600 | 0.8552 | 0.1376 | 0.0466 |
| 0.0006 | 134.96 | 1687 | 1.0419 | 0.86 | 0.2711 | 1.3112 | 0.8600 | 0.8552 | 0.1376 | 0.0466 |
| 0.0006 | 136.0 | 1700 | 1.0426 | 0.86 | 0.2711 | 1.3111 | 0.8600 | 0.8552 | 0.1376 | 0.0465 |
| 0.0006 | 136.96 | 1712 | 1.0433 | 0.86 | 0.2711 | 1.3110 | 0.8600 | 0.8552 | 0.1376 | 0.0465 |
| 0.0006 | 138.0 | 1725 | 1.0441 | 0.86 | 0.2711 | 1.3109 | 0.8600 | 0.8552 | 0.1376 | 0.0465 |
| 0.0006 | 138.96 | 1737 | 1.0448 | 0.86 | 0.2711 | 1.3108 | 0.8600 | 0.8552 | 0.1376 | 0.0465 |
| 0.0006 | 140.0 | 1750 | 1.0455 | 0.86 | 0.2710 | 1.3107 | 0.8600 | 0.8552 | 0.1377 | 0.0465 |
| 0.0006 | 140.96 | 1762 | 1.0461 | 0.86 | 0.2710 | 1.3106 | 0.8600 | 0.8552 | 0.1377 | 0.0465 |
| 0.0006 | 142.0 | 1775 | 1.0468 | 0.86 | 0.2710 | 1.3106 | 0.8600 | 0.8552 | 0.1377 | 0.0465 |
| 0.0006 | 142.96 | 1787 | 1.0474 | 0.86 | 0.2710 | 1.3105 | 0.8600 | 0.8552 | 0.1377 | 0.0465 |
| 0.0006 | 144.0 | 1800 | 1.0481 | 0.86 | 0.2710 | 1.3104 | 0.8600 | 0.8552 | 0.1377 | 0.0465 |
| 0.0006 | 144.96 | 1812 | 1.0487 | 0.86 | 0.2710 | 1.3103 | 0.8600 | 0.8552 | 0.1378 | 0.0465 |
| 0.0006 | 146.0 | 1825 | 1.0494 | 0.86 | 0.2709 | 1.3102 | 0.8600 | 0.8552 | 0.1378 | 0.0465 |
| 0.0006 | 146.96 | 1837 | 1.0500 | 0.86 | 0.2709 | 1.3102 | 0.8600 | 0.8552 | 0.1378 | 0.0465 |
| 0.0006 | 148.0 | 1850 | 1.0506 | 0.86 | 0.2709 | 1.3101 | 0.8600 | 0.8552 | 0.1378 | 0.0465 |
| 0.0006 | 148.96 | 1862 | 1.0511 | 0.86 | 0.2709 | 1.3100 | 0.8600 | 0.8552 | 0.1378 | 0.0464 |
| 0.0006 | 150.0 | 1875 | 1.0517 | 0.86 | 0.2709 | 1.3099 | 0.8600 | 0.8552 | 0.1378 | 0.0464 |
| 0.0006 | 150.96 | 1887 | 1.0523 | 0.86 | 0.2709 | 1.3099 | 0.8600 | 0.8552 | 0.1378 | 0.0464 |
| 0.0006 | 152.0 | 1900 | 1.0529 | 0.86 | 0.2708 | 1.3098 | 0.8600 | 0.8552 | 0.1378 | 0.0464 |
| 0.0006 | 152.96 | 1912 | 1.0534 | 0.86 | 0.2708 | 1.3097 | 0.8600 | 0.8552 | 0.1378 | 0.0464 |
| 0.0006 | 154.0 | 1925 | 1.0539 | 0.86 | 0.2708 | 1.3096 | 0.8600 | 0.8552 | 0.1378 | 0.0464 |
| 0.0006 | 154.96 | 1937 | 1.0544 | 0.86 | 0.2708 | 1.3096 | 0.8600 | 0.8552 | 0.1378 | 0.0464 |
| 0.0006 | 156.0 | 1950 | 1.0550 | 0.86 | 0.2708 | 1.3095 | 0.8600 | 0.8552 | 0.1378 | 0.0464 |
| 0.0006 | 156.96 | 1962 | 1.0554 | 0.86 | 0.2708 | 1.3094 | 0.8600 | 0.8552 | 0.1378 | 0.0464 |
| 0.0006 | 158.0 | 1975 | 1.0559 | 0.86 | 0.2707 | 1.3094 | 0.8600 | 0.8552 | 0.1378 | 0.0464 |
| 0.0006 | 158.96 | 1987 | 1.0563 | 0.86 | 0.2707 | 1.3093 | 0.8600 | 0.8552 | 0.1378 | 0.0463 |
| 0.0004 | 160.0 | 2000 | 1.0568 | 0.86 | 0.2707 | 1.3093 | 0.8600 | 0.8552 | 0.1378 | 0.0463 |
| 0.0004 | 160.96 | 2012 | 1.0573 | 0.86 | 0.2707 | 1.3092 | 0.8600 | 0.8552 | 0.1378 | 0.0463 |
| 0.0004 | 162.0 | 2025 | 1.0577 | 0.86 | 0.2707 | 1.3092 | 0.8600 | 0.8552 | 0.1378 | 0.0463 |
| 0.0004 | 162.96 | 2037 | 1.0581 | 0.86 | 0.2707 | 1.3091 | 0.8600 | 0.8552 | 0.1378 | 0.0463 |
| 0.0004 | 164.0 | 2050 | 1.0585 | 0.86 | 0.2707 | 1.3091 | 0.8600 | 0.8552 | 0.1378 | 0.0463 |
| 0.0004 | 164.96 | 2062 | 1.0589 | 0.86 | 0.2707 | 1.3090 | 0.8600 | 0.8552 | 0.1378 | 0.0463 |
| 0.0004 | 166.0 | 2075 | 1.0593 | 0.86 | 0.2707 | 1.3090 | 0.8600 | 0.8552 | 0.1378 | 0.0463 |
| 0.0004 | 166.96 | 2087 | 1.0597 | 0.86 | 0.2706 | 1.3089 | 0.8600 | 0.8552 | 0.1378 | 0.0463 |
| 0.0004 | 168.0 | 2100 | 1.0600 | 0.86 | 0.2706 | 1.3089 | 0.8600 | 0.8552 | 0.1378 | 0.0463 |
| 0.0004 | 168.96 | 2112 | 1.0603 | 0.86 | 0.2706 | 1.3089 | 0.8600 | 0.8552 | 0.1378 | 0.0463 |
| 0.0004 | 170.0 | 2125 | 1.0607 | 0.86 | 0.2706 | 1.3088 | 0.8600 | 0.8552 | 0.1378 | 0.0463 |
| 0.0004 | 170.96 | 2137 | 1.0610 | 0.86 | 0.2706 | 1.3088 | 0.8600 | 0.8552 | 0.1378 | 0.0463 |
| 0.0004 | 172.0 | 2150 | 1.0613 | 0.86 | 0.2706 | 1.3088 | 0.8600 | 0.8552 | 0.1378 | 0.0462 |
| 0.0004 | 172.96 | 2162 | 1.0616 | 0.86 | 0.2706 | 1.3087 | 0.8600 | 0.8552 | 0.1378 | 0.0462 |
| 0.0004 | 174.0 | 2175 | 1.0619 | 0.86 | 0.2706 | 1.3087 | 0.8600 | 0.8552 | 0.1378 | 0.0463 |
| 0.0004 | 174.96 | 2187 | 1.0621 | 0.86 | 0.2706 | 1.3087 | 0.8600 | 0.8552 | 0.1378 | 0.0462 |
| 0.0004 | 176.0 | 2200 | 1.0624 | 0.86 | 0.2706 | 1.3087 | 0.8600 | 0.8552 | 0.1378 | 0.0462 |
| 0.0004 | 176.96 | 2212 | 1.0626 | 0.86 | 0.2706 | 1.3086 | 0.8600 | 0.8552 | 0.1378 | 0.0462 |
| 0.0004 | 178.0 | 2225 | 1.0629 | 0.86 | 0.2706 | 1.3086 | 0.8600 | 0.8552 | 0.1378 | 0.0462 |
| 0.0004 | 178.96 | 2237 | 1.0630 | 0.86 | 0.2706 | 1.3086 | 0.8600 | 0.8552 | 0.1378 | 0.0462 |
| 0.0004 | 180.0 | 2250 | 1.0632 | 0.86 | 0.2706 | 1.3086 | 0.8600 | 0.8552 | 0.1378 | 0.0462 |
| 0.0004 | 180.96 | 2262 | 1.0634 | 0.86 | 0.2706 | 1.3086 | 0.8600 | 0.8552 | 0.1378 | 0.0463 |
| 0.0004 | 182.0 | 2275 | 1.0636 | 0.86 | 0.2705 | 1.3085 | 0.8600 | 0.8552 | 0.1378 | 0.0462 |
| 0.0004 | 182.96 | 2287 | 1.0637 | 0.86 | 0.2705 | 1.3085 | 0.8600 | 0.8552 | 0.1378 | 0.0462 |
| 0.0004 | 184.0 | 2300 | 1.0639 | 0.86 | 0.2705 | 1.3085 | 0.8600 | 0.8552 | 0.1378 | 0.0462 |
| 0.0004 | 184.96 | 2312 | 1.0640 | 0.86 | 0.2705 | 1.3085 | 0.8600 | 0.8552 | 0.1378 | 0.0462 |
| 0.0004 | 186.0 | 2325 | 1.0641 | 0.86 | 0.2705 | 1.3085 | 0.8600 | 0.8552 | 0.1378 | 0.0462 |
| 0.0004 | 186.96 | 2337 | 1.0642 | 0.86 | 0.2705 | 1.3085 | 0.8600 | 0.8552 | 0.1378 | 0.0462 |
| 0.0004 | 188.0 | 2350 | 1.0643 | 0.86 | 0.2705 | 1.3085 | 0.8600 | 0.8552 | 0.1378 | 0.0461 |
| 0.0004 | 188.96 | 2362 | 1.0643 | 0.86 | 0.2705 | 1.3085 | 0.8600 | 0.8552 | 0.1378 | 0.0461 |
| 0.0004 | 190.0 | 2375 | 1.0644 | 0.86 | 0.2705 | 1.3085 | 0.8600 | 0.8552 | 0.1378 | 0.0461 |
| 0.0004 | 190.96 | 2387 | 1.0644 | 0.86 | 0.2705 | 1.3085 | 0.8600 | 0.8552 | 0.1378 | 0.0461 |
| 0.0004 | 192.0 | 2400 | 1.0644 | 0.86 | 0.2705 | 1.3085 | 0.8600 | 0.8552 | 0.1378 | 0.0461 |

### Framework versions

- Transformers 4.30.2
- Pytorch 1.13.1
- Datasets 2.13.1
- Tokenizers 0.13.3