modelId: string (length 5 to 139)
author: string (length 2 to 42)
last_modified: timestamp[us, tz=UTC] (2020-02-15 11:33:14 to 2025-06-23 12:29:03)
downloads: int64 (0 to 223M)
likes: int64 (0 to 11.7k)
library_name: string (492 classes)
tags: sequence (length 1 to 4.05k)
pipeline_tag: string (54 classes)
createdAt: timestamp[us, tz=UTC] (2022-03-02 23:29:04 to 2025-06-23 12:24:08)
card: string (length 11 to 1.01M)
nishshekh/distilbert-base-uncased-finetuned-emotion
nishshekh
2023-07-08T05:11:40Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-08T03:31:12Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.927 - name: F1 type: f1 value: 0.9271664736493986 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. The model is trained in Chapter 2: Text Classification in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/02_classification.ipynb). It achieves the following results on the evaluation set: - Loss: 0.2192 - Accuracy: 0.927 - F1: 0.9272 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8569 | 1.0 | 250 | 0.3386 | 0.894 | 0.8888 | | 0.2639 | 2.0 | 500 | 0.2192 | 0.927 | 0.9272 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1+cu102 - Datasets 1.13.0 - Tokenizers 0.10.3
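A minimal inference sketch for the card above, assuming the checkpoint loads with the standard transformers text-classification pipeline and exposes the six labels of the emotion dataset:

```python
# Hedged sketch: querying nishshekh/distilbert-base-uncased-finetuned-emotion
# with the transformers pipeline API. The example sentence is illustrative.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="nishshekh/distilbert-base-uncased-finetuned-emotion",
)

# top_k=None returns a score for every emotion label rather than only the best one.
print(classifier("I am so happy you finally called me back!", top_k=None))
```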
arminmrm93/LunarLander-PPO-v2
arminmrm93
2023-07-08T05:01:44Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T04:59:05Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -168.46 +/- 100.21 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'arminmrm93/LunarLander-PPO-v2' 'batch_size': 512 'minibatch_size': 128} ```
DragGan/DragGan-Models
DragGan
2023-07-08T04:59:22Z
0
43
null
[ "arxiv:2305.10973", "region:us" ]
null
2023-06-25T23:08:14Z
# Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold More about these models here: https://github.com/XingangPan/DragGAN https://arxiv.org/abs/2305.10973 ``` @inproceedings{pan2023draggan, title={Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold}, author={Pan, Xingang and Tewari, Ayush and Leimk{\"u}hler, Thomas and Liu, Lingjie and Meka, Abhimitra and Theobalt, Christian}, booktitle = {ACM SIGGRAPH 2023 Conference Proceedings}, year={2023} } ```
arminmrm93/ppo-CartPole-v1
arminmrm93
2023-07-08T04:40:08Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T03:12:11Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -147.57 +/- 73.26 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'env_id': 'LunarLander-v2' 'anneal_lr': True 'clip_vloss': True 'gae': True 'gamma': 0.99 'ent_coeff': 0.01 'val_coeff': 0.5 'max_grad_norm': 0.5 'target_kl': None 'gae_lambda': 0.95 'learning_rate': 0.0001 'seed': 1993 'n_envs': 4 'total_timesteps': 25000 'n_timesteps': 32 'capture_video': False 'norm_adv': True 'clip_coef': 0.2 'mini_batch_size': 4 'epochs': 4} ```
NasimB/gpt2-concat-cbt-rarity-all-5p75k-p55k
NasimB
2023-07-08T04:20:09Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-08T02:28:33Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-concat-cbt-rarity-all-5p75k-p55k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-concat-cbt-rarity-all-5p75k-p55k This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 3.1884 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.7274 | 0.29 | 500 | 5.6366 | | 5.3729 | 0.59 | 1000 | 5.2059 | | 5.0241 | 0.88 | 1500 | 4.9462 | | 4.7442 | 1.17 | 2000 | 4.7999 | | 4.5874 | 1.46 | 2500 | 4.6725 | | 4.4736 | 1.76 | 3000 | 4.5694 | | 4.3472 | 2.05 | 3500 | 4.4901 | | 4.1581 | 2.34 | 4000 | 4.4410 | | 4.1297 | 2.63 | 4500 | 4.3834 | | 4.0798 | 2.93 | 5000 | 4.3289 | | 3.883 | 3.22 | 5500 | 4.3259 | | 3.8233 | 3.51 | 6000 | 4.2937 | | 3.8044 | 3.8 | 6500 | 4.2585 | | 3.7076 | 4.1 | 7000 | 4.2566 | | 3.5409 | 4.39 | 7500 | 4.2515 | | 3.5309 | 4.68 | 8000 | 4.2363 | | 3.5148 | 4.97 | 8500 | 4.2237 | | 3.3604 | 5.27 | 9000 | 4.2357 | | 3.3405 | 5.56 | 9500 | 4.2344 | | 3.3393 | 5.85 | 10000 | 4.2330 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
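The card above gives training details but no usage snippet; a possible way to sample from the checkpoint, assuming it loads as a standard GPT-2 causal LM (the prompt is purely illustrative):

```python
# Sketch: sampling a continuation from NasimB/gpt2-concat-cbt-rarity-all-5p75k-p55k.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "NasimB/gpt2-concat-cbt-rarity-all-5p75k-p55k"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("Once upon a time", return_tensors="pt")
with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=50,
        do_sample=True,
        top_p=0.95,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
    )
print(tokenizer.decode(output[0], skip_special_tokens=True))
```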
Bugsys0302/Nanashi-Mumei-LoRA
Bugsys0302
2023-07-08T04:08:05Z
0
1
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-08T04:03:55Z
--- license: creativeml-openrail-m ---
TigerResearch/tigerbot-7b-sft-v1
TigerResearch
2023-07-08T03:48:41Z
203
13
transformers
[ "transformers", "pytorch", "bloom", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-31T09:16:07Z
--- license: apache-2.0 --- <div style="width: 100%;"> <img src="http://x-pai.algolet.com/bot/img/logo_core.png" alt="TigerBot" style="width: 20%; display: block; margin: auto;"> </div> <p align="center"> <font face="黑体" size="5"> A cutting-edge foundation for your very own LLM. </font> </p> <p align="center"> 🌐 <a href="https://tigerbot.com/" target="_blank">TigerBot</a> • 🤗 <a href="https://huggingface.co/TigerResearch" target="_blank">Hugging Face</a> </p> ## GitHub https://github.com/TigerResearch/TigerBot ## Usage ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM from accelerate import infer_auto_device_map, dispatch_model from accelerate.utils import get_balanced_memory tokenizer = AutoTokenizer.from_pretrained("TigerResearch/tigerbot-7b-sft-v1") model = AutoModelForCausalLM.from_pretrained("TigerResearch/tigerbot-7b-sft-v1") max_memory = get_balanced_memory(model) device_map = infer_auto_device_map(model, max_memory=max_memory, no_split_module_classes=["BloomBlock"]) model = dispatch_model(model, device_map=device_map, offload_buffers=True) device = torch.cuda.current_device() tok_ins = "\n\n### Instruction:\n" tok_res = "\n\n### Response:\n" prompt_input = tok_ins + "{instruction}" + tok_res input_text = "What is the next number after this list: [1, 2, 3, 5, 8, 13, 21]" input_text = prompt_input.format_map({'instruction': input_text}) max_input_length = 512 max_generate_length = 1024 generation_kwargs = { "top_p": 0.95, "temperature": 0.8, "max_length": max_generate_length, "eos_token_id": tokenizer.eos_token_id, "pad_token_id": tokenizer.pad_token_id, "early_stopping": True, "no_repeat_ngram_size": 4, } inputs = tokenizer(input_text, return_tensors='pt', truncation=True, max_length=max_input_length) inputs = {k: v.to(device) for k, v in inputs.items()} output = model.generate(**inputs, **generation_kwargs) answer = '' for tok_id in output[0][inputs['input_ids'].shape[1]:]: if tok_id != tokenizer.eos_token_id: answer += tokenizer.decode(tok_id) print(answer) ```
liyingjian/q-FrozenLake-v1-4x4-noSlippery
liyingjian
2023-07-08T03:38:02Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T03:37:59Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="liyingjian/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Bugsys0302/headback-lora
Bugsys0302
2023-07-08T03:33:13Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-08T03:09:39Z
--- license: creativeml-openrail-m ---
chandrasutrisnotjhong/ppo-LunarLander-v2
chandrasutrisnotjhong
2023-07-08T03:31:34Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T02:46:20Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 261.38 +/- 22.03 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
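The usage section above is still a TODO; a sketch of how such a checkpoint is typically loaded with huggingface_sb3 and stable-baselines3. The archive name `ppo-LunarLander-v2.zip` is an assumption and should be checked against the repository's file listing:

```python
# Hedged sketch: loading the PPO checkpoint and rolling it out in LunarLander-v2.
# Written against the gymnasium API; the filename below is hypothetical.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="chandrasutrisnotjhong/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # hypothetical archive name
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
env.close()
```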
Ellbendls/a2c-AntBulletEnv-v0
Ellbendls
2023-07-08T03:25:40Z
1
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T03:24:38Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1172.68 +/- 114.70 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Shridipta-06/a2c-PandaReachDense-v23
Shridipta-06
2023-07-08T03:19:28Z
1
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T03:16:44Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -4.41 +/- 1.16 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of an **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
hegelty/kcbert-base-finetuned-josa
hegelty
2023-07-08T03:05:52Z
78
0
transformers
[ "transformers", "tf", "bert", "fill-mask", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-08T02:35:39Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: hegelty/kcbert-base-finetuned-josa results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # hegelty/kcbert-base-finetuned-josa This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.3074 - Validation Loss: 3.0620 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5104, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.3074 | 3.0620 | 0 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.9.2 - Datasets 2.13.1 - Tokenizers 0.13.3
SarielSinLuo/bert-large-uncased-finetuned-rte
SarielSinLuo
2023-07-08T02:51:59Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T16:46:02Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: bert-large-uncased-finetuned-rte results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: rte split: validation args: rte metrics: - name: Accuracy type: accuracy value: 0.7545126353790613 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-uncased-finetuned-rte This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7653 - Accuracy: 0.7545 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 312 | 0.6211 | 0.6643 | | 0.6082 | 2.0 | 624 | 0.6857 | 0.6643 | | 0.6082 | 3.0 | 936 | 0.7653 | 0.7545 | | 0.3509 | 4.0 | 1248 | 1.2287 | 0.7401 | | 0.153 | 5.0 | 1560 | 1.5192 | 0.7437 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
BillBojangeles2000/gpt_neo-Dialog-FLOW
BillBojangeles2000
2023-07-08T02:23:36Z
0
0
peft
[ "peft", "text-generation", "license:creativeml-openrail-m", "region:us" ]
text-generation
2023-06-11T22:08:46Z
--- library_name: peft license: creativeml-openrail-m pipeline_tag: text-generation ---
aroot/eng-mya-r1
aroot
2023-07-08T02:09:47Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-08T01:50:28Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-mya-r1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-mya-r1 This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8954 - Bleu: 3.9641 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
aroot/eng-fra-r3
aroot
2023-07-08T01:43:03Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-08T01:24:48Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-fra-r3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-fra-r3 This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1427 - Bleu: 32.4279 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
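The translation cards above (and the similar aroot checkpoints later in this listing) include no inference example; a sketch for eng-fra-r3, assuming the fine-tune keeps mBART-50's language codes ("en_XX" for English, "fr_XX" for French):

```python
# Hedged sketch: translating English to French with aroot/eng-fra-r3.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "aroot/eng-fra-r3"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

tokenizer.src_lang = "en_XX"  # assumed mBART-50 source-language code
encoded = tokenizer("The weather is lovely today.", return_tensors="pt")
generated = model.generate(
    **encoded,
    forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"],  # assumed target code
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```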
saipawan02/results
saipawan02
2023-07-08T01:41:27Z
0
0
null
[ "generated_from_trainer", "license:apache-2.0", "region:us" ]
null
2023-07-07T12:17:45Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [vilsonrodrigues/falcon-7b-instruct-sharded](https://huggingface.co/vilsonrodrigues/falcon-7b-instruct-sharded) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - training_steps: 100 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
saintzeno/reinforce-Pixelcopter-PLE-v0
saintzeno
2023-07-08T01:28:52Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-04T05:51:14Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 42.70 +/- 25.08 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
RajkNakka/poca-SoccerTwos
RajkNakka
2023-07-08T01:25:52Z
7
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-07-08T01:24:49Z
--- library_name: ml-agents tags: - SoccerTwos - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: RajkNakka/poca-SoccerTwos 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
aroot/eng-mya-simcse_random_usrl
aroot
2023-07-08T01:10:25Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-08T00:49:19Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-mya-simcse_random_usrl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-mya-simcse_random_usrl This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8870 - Bleu: 4.2308 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
aroot/eng-fra-r2
aroot
2023-07-08T00:59:57Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-08T00:41:18Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-fra-r2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-fra-r2 This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1498 - Bleu: 31.6893 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
WALIDALI/rawaawly
WALIDALI
2023-07-08T00:59:35Z
6
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-08T00:47:05Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### Rawaawly Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
squeeze-ai-lab/sq-opt-30b-w3-s50
squeeze-ai-lab
2023-07-08T00:51:17Z
0
0
null
[ "arxiv:2306.07629", "arxiv:2205.01068", "region:us" ]
null
2023-07-07T05:53:05Z
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving. **TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced-precision quantization, but naive methods hurt performance. We address this with a new Dense-and-Sparse Quantization method. Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance, and a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach, we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality. For more details, please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf). ## Model description 3-bit quantized OPT 30B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf). * **Base Model:** [OPT 30B](https://arxiv.org/abs/2205.01068) * **Bitwidth:** 3-bit * **Sparsity Level:** 0.5% ## Links * **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf) * **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM) --- license: other ---
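To make the Dense-and-Sparse idea above concrete, here is a toy split of a single weight matrix: the largest-magnitude ~0.5% of entries are kept exactly in a sparse tensor and the remainder is quantized. A uniform 3-bit quantizer stands in for SqueezeLLM's sensitivity-based non-uniform scheme, so this only illustrates the decomposition, not the released method:

```python
# Toy Dense-and-Sparse decomposition: exact sparse outliers + quantized dense rest.
import torch

def dense_and_sparse(W: torch.Tensor, sparsity: float = 0.005, bits: int = 3):
    # Keep the top `sparsity` fraction of weights (by magnitude) exactly.
    k = max(1, int(W.numel() * sparsity))
    threshold = W.abs().flatten().kthvalue(W.numel() - k).values
    outlier_mask = W.abs() > threshold
    sparse_part = torch.where(outlier_mask, W, torch.zeros_like(W)).to_sparse()

    # Quantize the remaining dense part on a simple uniform 3-bit grid
    # (a placeholder for the paper's non-uniform, sensitivity-aware quantizer).
    dense_part = torch.where(outlier_mask, torch.zeros_like(W), W)
    levels = 2 ** bits - 1
    w_min, w_max = dense_part.min(), dense_part.max()
    scale = (w_max - w_min) / levels
    dense_q = torch.round((dense_part - w_min) / scale) * scale + w_min
    dense_q = torch.where(outlier_mask, torch.zeros_like(W), dense_q)
    return dense_q, sparse_part

W = torch.randn(512, 512)
dense_q, sparse_part = dense_and_sparse(W)
reconstruction = dense_q + sparse_part.to_dense()
print("mean abs error:", (W - reconstruction).abs().mean().item())
```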
LanzerPotaz/Dumb_Huggy_3.0
LanzerPotaz
2023-07-08T00:45:06Z
3
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-07-08T00:45:02Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: LanzerPotaz/Dumb_Huggy_3.0 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
DIOS9/taxi
DIOS9
2023-07-08T00:40:53Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T00:40:50Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: taxi results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="DIOS9/taxi", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
ALM-AHME/swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-Lesion-Classification-HAM10000-AH
ALM-AHME
2023-07-08T00:39:40Z
15
1
transformers
[ "transformers", "pytorch", "tensorboard", "swinv2", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-07T17:41:18Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-Lesion-Classification-HAM10000-AH results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: Augmented-Final split: train args: Augmented-Final metrics: - name: Accuracy type: accuracy value: 0.9681397738951696 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-Lesion-Classification-HAM10000-AH This model is a fine-tuned version of [microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft](https://huggingface.co/microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1143 - Accuracy: 0.9681 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.9 - num_epochs: 12 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9527 | 1.0 | 122 | 1.9746 | 0.1716 | | 1.818 | 2.0 | 244 | 1.7423 | 0.3628 | | 1.5044 | 3.0 | 366 | 1.3707 | 0.5046 | | 1.1173 | 4.0 | 488 | 0.9796 | 0.6300 | | 0.8714 | 5.0 | 610 | 0.7475 | 0.7379 | | 0.8631 | 6.0 | 732 | 0.5978 | 0.7729 | | 0.628 | 7.0 | 854 | 0.4791 | 0.8212 | | 0.5588 | 8.0 | 976 | 0.3517 | 0.8705 | | 0.5632 | 9.0 | 1098 | 0.2564 | 0.9168 | | 0.3693 | 10.0 | 1220 | 0.1875 | 0.9455 | | 0.321 | 11.0 | 1342 | 0.1525 | 0.9424 | | 0.2761 | 12.0 | 1464 | 0.1143 | 0.9681 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
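For completeness, a minimal way to run the lesion classifier above on a local image; `lesion.jpg` is a placeholder path, and the label names come from the training imagefolder, which the card does not enumerate:

```python
# Hedged sketch: image classification with the fine-tuned SwinV2 checkpoint above.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="ALM-AHME/swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-Lesion-Classification-HAM10000-AH",
)

predictions = classifier("lesion.jpg", top_k=3)  # placeholder image path
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```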
aroot/eng-guj-simcse_random_usrl
aroot
2023-07-08T00:29:59Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-08T00:08:29Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-guj-simcse_random_usrl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-guj-simcse_random_usrl This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2803 - Bleu: 2.8935 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
aroot/eng-guj-simcse_central_usrl
aroot
2023-07-08T00:25:52Z
110
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-08T00:04:17Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-guj-simcse_central_usrl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-guj-simcse_central_usrl This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2765 - Bleu: 2.8046 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
aroot/eng-mya-simcse_random_ssrl
aroot
2023-07-08T00:19:13Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T23:58:19Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-mya-simcse_random_ssrl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-mya-simcse_random_ssrl This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8898 - Bleu: 4.1148 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
aroot/eng-guj-simcse_central_ssrl
aroot
2023-07-07T23:42:40Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T23:24:29Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-guj-simcse_central_ssrl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-guj-simcse_central_ssrl This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2825 - Bleu: 2.5968 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
LarryAIDraw/Ruby
LarryAIDraw
2023-07-07T23:34:28Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-07T23:32:32Z
--- license: creativeml-openrail-m --- https://civitai.com/models/102477/hoshino-ruby-or-oshi-no-ko
NasimB/gpt2-concat-cbt-rarity-all-12k-p8k
NasimB
2023-07-07T23:13:28Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-06T19:43:42Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-concat-cbt-rarity-all-12k-p8k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-concat-cbt-rarity-all-12k-p8k This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 3.1831 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.7231 | 0.3 | 500 | 5.6302 | | 5.3719 | 0.59 | 1000 | 5.2055 | | 5.0197 | 0.89 | 1500 | 4.9471 | | 4.7464 | 1.18 | 2000 | 4.7963 | | 4.5886 | 1.48 | 2500 | 4.6741 | | 4.4756 | 1.78 | 3000 | 4.5652 | | 4.3281 | 2.07 | 3500 | 4.4954 | | 4.1589 | 2.37 | 4000 | 4.4441 | | 4.1235 | 2.66 | 4500 | 4.3829 | | 4.0853 | 2.96 | 5000 | 4.3318 | | 3.8541 | 3.25 | 5500 | 4.3268 | | 3.8262 | 3.55 | 6000 | 4.2947 | | 3.8025 | 3.85 | 6500 | 4.2636 | | 3.6668 | 4.14 | 7000 | 4.2639 | | 3.532 | 4.44 | 7500 | 4.2560 | | 3.5262 | 4.73 | 8000 | 4.2409 | | 3.4988 | 5.03 | 8500 | 4.2379 | | 3.3407 | 5.33 | 9000 | 4.2431 | | 3.337 | 5.62 | 9500 | 4.2420 | | 3.3339 | 5.92 | 10000 | 4.2411 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
aroot/eng-fra-simcse_random_ssrl
aroot
2023-07-07T23:06:31Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T22:51:26Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-fra-simcse_random_ssrl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-fra-simcse_random_ssrl This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1462 - Bleu: 31.7089 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
ytung/ql-Taxi-v3
ytung
2023-07-07T23:04:24Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T23:04:23Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: ql-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.54 +/- 2.73 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="ytung/ql-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
ytung/q-FrozenLake-v1-4x4-noSlippery
ytung
2023-07-07T23:02:23Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-06-20T22:52:20Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="ytung/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
Erfan2001/Final_PersianTextClassificationModel
Erfan2001
2023-07-07T22:58:50Z
65
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T22:48:52Z
--- tags: - generated_from_keras_callback model-index: - name: my-awesome-model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # my-awesome-model This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Tokenizers 0.13.3
HeshamMamdouh/arabart-finetune-sum-v7-fine-tuned
HeshamMamdouh
2023-07-07T22:50:36Z
61
0
transformers
[ "transformers", "tf", "mbart", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-07T22:50:16Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: arabart-finetune-sum-v7-fine-tuned results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # arabart-finetune-sum-v7-fine-tuned This model is a fine-tuned version of [abdalrahmanshahrour/AraBART-summ](https://huggingface.co/abdalrahmanshahrour/AraBART-summ) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.0272 - Validation Loss: 1.9599 - Train Lr: 3e-05 - Epoch: 6 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Lr | Epoch | |:----------:|:---------------:|:--------:|:-----:| | 2.5255 | 1.9534 | 3e-05 | 0 | | 2.4124 | 1.9075 | 3e-05 | 1 | | 2.3131 | 1.9357 | 3e-05 | 2 | | 2.2068 | 1.9350 | 3e-05 | 3 | | 2.1723 | 1.9369 | 3e-05 | 4 | | 2.1073 | 1.9452 | 3e-05 | 5 | | 2.0272 | 1.9599 | 3e-05 | 6 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.13.0 - Datasets 2.13.1 - Tokenizers 0.13.3
trevorj/q-FrozenLake-v1-4x4-noSlippery
trevorj
2023-07-07T22:48:01Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T22:47:59Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="trevorj/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
HaziqRazali/ppo-LunarLander-v2
HaziqRazali
2023-07-07T22:47:13Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T22:46:53Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: ppo results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 242.01 +/- 20.00 name: mean_reward verified: false --- # **ppo** Agent playing **LunarLander-v2** This is a trained model of a **ppo** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
NasimB/gpt2-concat-cbt-rarity-all-no-cbt-7k-p8k
NasimB
2023-07-07T22:29:05Z
8
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-07T19:38:15Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-concat-cbt-rarity-all-no-cbt-7k-p8k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-concat-cbt-rarity-all-no-cbt-7k-p8k This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 3.1829 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.7276 | 0.29 | 500 | 5.6334 | | 5.3681 | 0.59 | 1000 | 5.2101 | | 5.0257 | 0.88 | 1500 | 4.9484 | | 4.7492 | 1.17 | 2000 | 4.7957 | | 4.586 | 1.47 | 2500 | 4.6672 | | 4.476 | 1.76 | 3000 | 4.5561 | | 4.3388 | 2.05 | 3500 | 4.4815 | | 4.1528 | 2.35 | 4000 | 4.4321 | | 4.1203 | 2.64 | 4500 | 4.3692 | | 4.08 | 2.93 | 5000 | 4.3225 | | 3.8694 | 3.23 | 5500 | 4.3200 | | 3.8133 | 3.52 | 6000 | 4.2856 | | 3.8082 | 3.81 | 6500 | 4.2543 | | 3.6896 | 4.11 | 7000 | 4.2513 | | 3.5344 | 4.4 | 7500 | 4.2450 | | 3.5282 | 4.69 | 8000 | 4.2309 | | 3.5178 | 4.99 | 8500 | 4.2181 | | 3.346 | 5.28 | 9000 | 4.2297 | | 3.3387 | 5.57 | 9500 | 4.2294 | | 3.3317 | 5.87 | 10000 | 4.2282 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
Khushnur/t5-base-end2end-questions-generation_eli_squad_single_exp
Khushnur
2023-07-07T22:17:13Z
164
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-07T20:33:49Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: t5-base-end2end-questions-generation_eli_squad_single_exp results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-end2end-questions-generation_eli_squad_single_exp This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.7241 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.4297 | 0.25 | 100 | 2.7250 | | 2.2459 | 0.5 | 200 | 2.7337 | | 2.2066 | 0.74 | 300 | 2.7301 | | 2.1867 | 0.99 | 400 | 2.7186 | | 2.1046 | 1.24 | 500 | 2.7268 | | 2.1003 | 1.49 | 600 | 2.7269 | | 2.0799 | 1.74 | 700 | 2.7222 | | 2.0852 | 1.99 | 800 | 2.7238 | | 2.0323 | 2.23 | 900 | 2.7258 | | 2.0297 | 2.48 | 1000 | 2.7252 | | 2.0451 | 2.73 | 1100 | 2.7230 | | 2.0208 | 2.98 | 1200 | 2.7241 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
neilsun2009/amz_movie_tv_distilgpt2_5k_longest
neilsun2009
2023-07-07T22:16:36Z
4
0
peft
[ "peft", "gpt-2", "text-generation", "en", "region:us" ]
text-generation
2023-07-07T22:12:49Z
--- language: - en metrics: - perplexity library_name: peft pipeline_tag: text-generation tags: - gpt-2 ---
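Since this entry ships only PEFT adapter metadata, here is a minimal sketch of attaching the adapter to its base causal LM; that the base is `distilgpt2` is an assumption inferred from the adapter name, and in practice the base is read back from the adapter config at load time.

```python
# Sketch for loading a PEFT adapter on top of its base model (base assumed to be distilgpt2).
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "neilsun2009/amz_movie_tv_distilgpt2_5k_longest"
config = PeftConfig.from_pretrained(adapter_id)  # records the base model used at training time
base = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)

# Generate a short movie/TV-review style continuation
inputs = tokenizer("This movie was", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0]))
```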
TheBloke/bloomz-176B-GPTQ
TheBloke
2023-07-07T22:03:59Z
14
20
transformers
[ "transformers", "bloom", "text-generation", "ak", "ar", "as", "bm", "bn", "ca", "code", "en", "es", "eu", "fon", "fr", "gu", "hi", "id", "ig", "ki", "kn", "lg", "ln", "ml", "mr", "ne", "nso", "ny", "or", "pa", "pt", "rn", "rw", "sn", "st", "sw", "ta", "te", "tn", "ts", "tum", "tw", "ur", "vi", "wo", "xh", "yo", "zh", "zu", "dataset:bigscience/xP3", "arxiv:2211.01786", "license:bigscience-bloom-rail-1.0", "model-index", "autotrain_compatible", "region:us" ]
text-generation
2023-07-05T12:49:41Z
--- datasets: - bigscience/xP3 license: bigscience-bloom-rail-1.0 language: - ak - ar - as - bm - bn - ca - code - en - es - eu - fon - fr - gu - hi - id - ig - ki - kn - lg - ln - ml - mr - ne - nso - ny - or - pa - pt - rn - rw - sn - st - sw - ta - te - tn - ts - tum - tw - ur - vi - wo - xh - yo - zh - zu programming_language: - C - C++ - C# - Go - Java - JavaScript - Lua - PHP - Python - Ruby - Rust - Scala - TypeScript pipeline_tag: text-generation inference: false widget: - text: "一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。Would you rate the previous review as positive, neutral or negative?" example_title: "zh-en sentiment" - text: "一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?" example_title: "zh-zh sentiment" - text: "Suggest at least five related search terms to \"Mạng neural nhân tạo\"." example_title: "vi-en query" - text: "Proposez au moins cinq mots clés concernant «Réseau de neurones artificiels»." example_title: "fr-fr query" - text: "Explain in a sentence in Telugu what is backpropagation in neural networks." example_title: "te-en qa" - text: "Why is the sky blue?" example_title: "en-en qa" - text: "Explain to me in Traditional Chinese what is the difference between Bitcoin and Ethereum." example_title: "zh-en qa" - text: "Write a code snippet with O(log(n)) computational complexity." example_title: "code-en" - text: "Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is \"Heroes Come in All Shapes and Sizes\". Story (in Spanish):" example_title: "es-en fable" - text: "Write a fable about wood elves living in a forest that is suddenly invaded by ogres. The fable is a masterpiece that has achieved praise worldwide and its moral is \"Violence is the last refuge of the incompetent\". Fable (in Hindi):" example_title: "hi-en fable" - text: "How many sides does a rectangle and heptagon have, when combined? Answer this question with some math. Ein Rechteck hat 4 Seiten. Ein Siebeneck hat 7 Seiten. In Kombination haben sie 4 + 7 = 11 Seiten. كم عدد الأضلاع التي يجمعها المربع والمثلث؟ Répondez à cette question en chinois." 
example_title: "en-de-ar-fr-zh math" model-index: - name: bloomz results: - task: type: Coreference resolution dataset: type: winogrande name: Winogrande XL (xl) config: xl split: validation revision: a80f460359d1e9a67c006011c94de42a8759430c metrics: - type: Accuracy value: 59.27 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (en) config: en split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 69.08 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (fr) config: fr split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 68.67 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (jp) config: jp split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 59.65 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (pt) config: pt split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 64.26 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (ru) config: ru split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 60.95 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (zh) config: zh split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 70.24 - task: type: Natural language inference dataset: type: anli name: ANLI (r1) config: r1 split: validation revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094 metrics: - type: Accuracy value: 48.6 - task: type: Natural language inference dataset: type: anli name: ANLI (r2) config: r2 split: validation revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094 metrics: - type: Accuracy value: 44.1 - task: type: Natural language inference dataset: type: anli name: ANLI (r3) config: r3 split: validation revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094 metrics: - type: Accuracy value: 45.5 - task: type: Natural language inference dataset: type: super_glue name: SuperGLUE (cb) config: cb split: validation revision: 9e12063561e7e6c79099feb6d5a493142584e9e2 metrics: - type: Accuracy value: 82.14 - task: type: Natural language inference dataset: type: super_glue name: SuperGLUE (rte) config: rte split: validation revision: 9e12063561e7e6c79099feb6d5a493142584e9e2 metrics: - type: Accuracy value: 85.56 - task: type: Natural language inference dataset: type: xnli name: XNLI (ar) config: ar split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 60.68 - task: type: Natural language inference dataset: type: xnli name: XNLI (bg) config: bg split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 48.43 - task: type: Natural language inference dataset: type: xnli name: XNLI (de) config: de split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 54.38 - task: type: Natural language inference dataset: type: xnli name: XNLI (el) config: el split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 47.43 - task: type: Natural language inference dataset: type: xnli name: XNLI (en) config: en split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 67.47 - task: type: Natural language inference 
dataset: type: xnli name: XNLI (es) config: es split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 61.24 - task: type: Natural language inference dataset: type: xnli name: XNLI (fr) config: fr split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 61.37 - task: type: Natural language inference dataset: type: xnli name: XNLI (hi) config: hi split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 60.2 - task: type: Natural language inference dataset: type: xnli name: XNLI (ru) config: ru split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 54.02 - task: type: Natural language inference dataset: type: xnli name: XNLI (sw) config: sw split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 52.09 - task: type: Natural language inference dataset: type: xnli name: XNLI (th) config: th split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 43.78 - task: type: Natural language inference dataset: type: xnli name: XNLI (tr) config: tr split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 45.7 - task: type: Natural language inference dataset: type: xnli name: XNLI (ur) config: ur split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 50.8 - task: type: Natural language inference dataset: type: xnli name: XNLI (vi) config: vi split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 61.0 - task: type: Natural language inference dataset: type: xnli name: XNLI (zh) config: zh split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 56.91 - task: type: Program synthesis dataset: type: openai_humaneval name: HumanEval config: None split: test revision: e8dc562f5de170c54b5481011dd9f4fa04845771 metrics: - type: Pass@1 value: 12.06 - type: Pass@10 value: 26.53 - type: Pass@100 value: 48.44 - task: type: Sentence completion dataset: type: story_cloze name: StoryCloze (2016) config: "2016" split: validation revision: e724c6f8cdf7c7a2fb229d862226e15b023ee4db metrics: - type: Accuracy value: 96.26 - task: type: Sentence completion dataset: type: super_glue name: SuperGLUE (copa) config: copa split: validation revision: 9e12063561e7e6c79099feb6d5a493142584e9e2 metrics: - type: Accuracy value: 91.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (et) config: et split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 51.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (ht) config: ht split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 58.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (id) config: id split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 86.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (it) config: it split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 74.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (qu) config: qu split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 56.0 - task: type: Sentence 
completion dataset: type: xcopa name: XCOPA (sw) config: sw split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 64.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (ta) config: ta split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 69.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (th) config: th split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 58.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (tr) config: tr split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 57.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (vi) config: vi split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 87.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (zh) config: zh split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 90.0 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (ar) config: ar split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 92.79 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (es) config: es split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 94.37 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (eu) config: eu split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 86.9 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (hi) config: hi split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 88.42 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (id) config: id split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 92.12 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (my) config: my split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 52.35 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (ru) config: ru split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 81.73 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (sw) config: sw split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 79.81 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (te) config: te split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 81.2 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (zh) config: zh split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 93.12 --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div 
style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # BigScience's BLOOMZ 176B GPTQ These files are GPTQ 4bit model files for [BigScience's BLOOMZ](https://huggingface.co/bigscience/bloomz). It is the result of quantising to 4bit using [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ). **This is a BIG model! 2 x 80GB or 3 x 48GB GPUs are required** ## Important note: files must be joined before use It is not currently possible to shard GPTQ files, therefore the model file is one single 94 GB `safetensors` file. Huggingface Hub has a 50GB per-file limit. I have therefore been forced to split the file into three parts for upload. I did this using the simple *nix command `split`. To join the files on any *nix system, run: ``` cat gptq_model-4bit--1g.JOINBEFOREUSE.split-*.safetensors > gptq_model-4bit--1g.safetensors ``` To join the files on Windows, open a Command Prompt and run: ``` COPY /B gptq_model-4bit--1g.JOINBEFOREUSE.split-a.safetensors + gptq_model-4bit--1g.JOINBEFOREUSE.split-b.safetensors + gptq_model-4bit--1g.JOINBEFOREUSE.split-c.safetensors gptq_model-4bit--1g.safetensors ``` Or for Python code for joining the files, see the Python section below. The SHA256SUM of the joined file will be: ``` 50baeab9859362d22df6f822f158b9ba75b44ffc6605b715992fe6245aa6e93a gptq_model-4bit--1g.safetensors ``` Once you have the joined file, you can safely delete `gptq_model-4bit--1g.JOINBEFOREUSE.split-*.safetensors`. ## Repositories available * [4-bit GPTQ model for GPU inference](https://huggingface.co/TheBloke/bloomz-176B-GPTQ) * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/bigscience/bloomz) ## Two files provided - separate branches - Main branch: `gptq_model-4bit--1g.safetensors` - Group Size = None - Desc Act (act-order) = True - This version will use the least possible VRAM, and should have higher inference performance in CUDA mode - Branch `group_size_128g`: `gptq_model-4bit-128g.safetensors` - Group Size = 128g - Desc Act (act-order) = True - This version will use more VRAM, which shouldn't be a problem as it shouldn't exceed 2 x 80GB or 3 x 48GB cards. - However CUDA inference performance is likely to be a lot slower, possibly necessitating the use of Triton mode. By default you will download the first file, unless you choose to download from branch `group_size_128g`. ## Prompt template: none ``` Translate to English: Je t’aime. Translation: ``` ## How to easily download and use this model in text-generation-webui Please make sure you're using the latest version of text-generation-webui. Note 1: this is a non-Llama model which cannot be used with ExLlama. Use Loader: AutoGPTQ. Note 2: As described above, you must join the files after downloading and before loading in text-generation-webui. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/bloomz-176B-GPTQ`. - If you would rather download the group_size 128g version, enter `TheBloke/bloomz-176B-GPTQ:group_size_128g` 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". This is a huge model so it may take a while! 5.
Now follow the steps described above to join the model to get a single `.safetensors` file. 6. Untick **Autoload model**. 7. In the top left, click the refresh icon next to **Model**. 8. In the **Model** dropdown, choose the model you just downloaded: `bloomz-176B-GPTQ` 9. Make sure Loader is set to AutoGPTQ. 10. This model cannot load on one GPU, so you should set **GPU Memory** accordingly. - If using two 80GB GPUs, try: GPU0 = 60GB, GPU1 = 79GB - If using three 48GB GPUs, try: GPU0 = 30GB, GPU1 = 47GB, GPU2 = 47GB 11. Click **Save settings** to save your settings, and then **Reload** to load the model. 12. The model will load, and is now ready for use! 13. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! ## How to use this GPTQ model from Python code First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed: `GITHUB_ACTIONS=true pip install auto-gptq` Because this model has to be joined locally, you must first download it. Example download code: ```python from huggingface_hub import snapshot_download snapshot_download(repo_id="TheBloke/bloomz-176B-GPTQ", local_dir="/workspace/models/bloomz-176GB-GPTQ", local_dir_use_symlinks=False) ``` If you want to download the group_size 128g file instead, add `revision="group_size_128g"` to the above command. Now join the three `split` files, which can be done with the following Python code: ```python import glob # Get the list of all files matching the pattern files = sorted(glob.glob('gptq_model-4bit--1g.JOINBEFOREUSE.split-*.safetensors')) # Open the output file in binary write mode with open('gptq_model-4bit--1g.safetensors', 'wb') as outfile: for filename in files: with open(filename, 'rb') as infile: outfile.write(infile.read()) ``` Then try the following example code: ```python from transformers import AutoTokenizer, pipeline, logging from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig import argparse # Use the local path you downloaded the model to and joined the split files in model_name_or_path = "/workspace/models/bloomz-176GB-GPTQ" model_basename = "gptq_model-4bit--1g" use_triton = False tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) model = AutoGPTQForCausalLM.from_quantized(model_name_or_path, model_basename=model_basename, max_memory={0: '60GiB', 1: '79GiB'}, # max_memory is for 2 x 80GB GPUs; adjust if your config is different! use_safetensors=True, trust_remote_code=False, use_triton=use_triton, quantize_config=None) prompt = "Translate this to French: AI is the future of computing" prompt_template=f'''{prompt} Translation: ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline # Prevent printing spurious transformers error when using pipeline with AutoGPTQ logging.set_verbosity(logging.CRITICAL) print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, temperature=0.7, top_p=0.95, repetition_penalty=1.15 ) print(pipe(prompt_template)[0]['generated_text']) ``` ## Provided files ## Main branch: **gptq_model-4bit--1g.safetensors** This will work with AutoGPTQ. It is untested with GPTQ-for-LLaMa. It will *not* work with ExLlama. It was created with group_size none (-1) to reduce VRAM usage, and with --act-order (desc_act) to improve accuracy of responses.
* `gptq_model-4bit-128g.safetensors` * Works with AutoGPTQ in CUDA or Triton modes. * Does NOT work with [ExLlama](https://github.com/turboderp/exllama) as it's not a Llama model. * Untested with GPTQ-for-LLaMa. * Works with text-generation-webui, including one-click-installers. * Parameters: Groupsize = -1. Act Order / desc_act = True. ## Branch `group_size_128g` **gptq_model-4bit-128g.safetensors** This will work with AutoGPTQ. It is untested with GPTQ-for-LLaMa. It will *not* work with ExLlama. It was created with both group_size 128g and --act-order (desc_act) for even higher inference accuracy, at the cost of increased VRAM usage. Because we already need 2 x 80GB or 3 x 48GB GPUs, I don't expect the increased VRAM usage to change the GPU requirements. **Note** Using group_size + desc_act together can significantly lower performance in AutoGPTQ CUDA. You might want to try AutoGPTQ Triton mode instead (Linux only.) * `gptq_model-4bit-128g.safetensors` * Works with AutoGPTQ in CUDA or Triton modes. * Does NOT work with [ExLlama](https://github.com/turboderp/exllama) as it's not a Llama model. * Untested with GPTQ-for-LLaMa. * Works with text-generation-webui, including one-click-installers. * Parameters: Groupsize = 128. Act Order / desc_act = True. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: zynix , ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: BigScience's BLOOMZ 176B ![xmtf](https://github.com/bigscience-workshop/xmtf/blob/master/xmtf_banner.png?raw=true) # Table of Contents 1. [Model Summary](#model-summary) 2. [Use](#use) 3. [Limitations](#limitations) 4. [Training](#training) 5. 
[Evaluation](#evaluation) 7. [Citation](#citation) # Model Summary > We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find the resulting models capable of crosslingual generalization to unseen tasks & languages. - **Repository:** [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf) - **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786) - **Point of Contact:** [Niklas Muennighoff](mailto:[email protected]) - **Languages:** Refer to [bloom](https://huggingface.co/bigscience/bloom) for pretraining & [xP3](https://huggingface.co/datasets/bigscience/xP3) for finetuning language proportions. It understands both pretraining & finetuning languages. - **BLOOMZ & mT0 Model Family:** <div class="max-w-full overflow-auto"> <table> <tr> <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3>xP3</a>. Recommended for prompting in English. </tr> <tr> <td>Parameters</td> <td>300M</td> <td>580M</td> <td>1.2B</td> <td>3.7B</td> <td>13B</td> <td>560M</td> <td>1.1B</td> <td>1.7B</td> <td>3B</td> <td>7.1B</td> <td>176B</td> </tr> <tr> <td>Finetuned Model</td> <td><a href=https://huggingface.co/bigscience/mt0-small>mt0-small</a></td> <td><a href=https://huggingface.co/bigscience/mt0-base>mt0-base</a></td> <td><a href=https://huggingface.co/bigscience/mt0-large>mt0-large</a></td> <td><a href=https://huggingface.co/bigscience/mt0-xl>mt0-xl</a></td> <td><a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-560m>bloomz-560m</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-1b1>bloomz-1b1</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-1b7>bloomz-1b7</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-3b>bloomz-3b</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-7b1>bloomz-7b1</a></td> <td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td> </tr> </tr> <tr> <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a>. Recommended for prompting in non-English.</th> </tr> <tr> <td>Finetuned Model</td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/bloomz-7b1-mt>bloomz-7b1-mt</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a></td> </tr> <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/Muennighoff/P3>P3</a>. Released for research purposes only. Strictly inferior to above models!</th> </tr> <tr> <td>Finetuned Model</td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/bloomz-7b1-p3>bloomz-7b1-p3</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a></td> </tr> <th colspan="12">Original pretrained checkpoints. 
Not recommended.</th> <tr> <td>Pretrained Model</td> <td><a href=https://huggingface.co/google/mt5-small>mt5-small</a></td> <td><a href=https://huggingface.co/google/mt5-base>mt5-base</a></td> <td><a href=https://huggingface.co/google/mt5-large>mt5-large</a></td> <td><a href=https://huggingface.co/google/mt5-xl>mt5-xl</a></td> <td><a href=https://huggingface.co/google/mt5-xxl>mt5-xxl</a></td> <td><a href=https://huggingface.co/bigscience/bloom-560m>bloom-560m</a></td> <td><a href=https://huggingface.co/bigscience/bloom-1b1>bloom-1b1</a></td> <td><a href=https://huggingface.co/bigscience/bloom-1b7>bloom-1b7</a></td> <td><a href=https://huggingface.co/bigscience/bloom-3b>bloom-3b</a></td> <td><a href=https://huggingface.co/bigscience/bloom-7b1>bloom-7b1</a></td> <td><a href=https://huggingface.co/bigscience/bloom>bloom</a></td> </tr> </table> </div> # Use ## Intended use We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "*Translate to English: Je t’aime.*", the model will most likely answer "*I love you.*". Some prompt ideas from our paper: - 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评? - Suggest at least five related search terms to "Mạng neural nhân tạo". - Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish): - Explain in a sentence in Telugu what is backpropagation in neural networks. **Feel free to share your generations in the Community tab!** ## How to use ### CPU <details> <summary> Click to expand </summary> ```python # pip install -q transformers from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "bigscience/bloomz" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained(checkpoint) inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` </details> ### GPU <details> <summary> Click to expand </summary> ```python # pip install -q transformers accelerate from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "bigscience/bloomz" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto") inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` </details> ### GPU in 8bit <details> <summary> Click to expand </summary> ```python # pip install -q transformers accelerate bitsandbytes from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "bigscience/bloomz" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True) inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` </details> <!-- Necessary for whitespace --> ### # Limitations **Prompt Engineering:** The performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops to avoid the model trying to continue it. For example, the prompt "*Translate to English: Je t'aime*" without the full stop (.) 
at the end, may result in the model trying to continue the French sentence. Better prompts are e.g. "*Translate to English: Je t'aime.*", "*Translate to English: Je t'aime. Translation:*" "*What is "Je t'aime." in English?*", where it is clear for the model when it should answer. Further, we recommend providing the model as much context as possible. For example, if you want it to answer in Telugu, then tell the model, e.g. "*Explain in a sentence in Telugu what is backpropagation in neural networks.*". # Training ## Model - **Architecture:** Same as [bloom](https://huggingface.co/bigscience/bloom), also refer to the `config.json` file - **Finetuning steps:** 498 - **Finetuning tokens:** 2.09 billion - **Finetuning layout:** 72x pipeline parallel, 1x tensor parallel, 4x data parallel - **Precision:** bfloat16 ## Hardware - **CPUs:** AMD CPUs with 512GB memory per node - **GPUs:** 288 A100 80GB GPUs with 8 GPUs per node (36 nodes) using NVLink 4 inter-gpu connects, 4 OmniPath links - **Communication:** NCCL-communications network with a fully dedicated subnet ## Software - **Orchestration:** [Megatron-DeepSpeed](https://github.com/bigscience-workshop/Megatron-DeepSpeed) - **Optimizer & parallelism:** [DeepSpeed](https://github.com/microsoft/DeepSpeed) - **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch) (pytorch-1.11 w/ CUDA-11.5) - **FP16 if applicable:** [apex](https://github.com/NVIDIA/apex) # Evaluation We refer to Table 7 from our [paper](https://arxiv.org/abs/2211.01786) & [bigscience/evaluation-results](https://huggingface.co/datasets/bigscience/evaluation-results) for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config. # Citation ```bibtex @article{muennighoff2022crosslingual, title={Crosslingual generalization through multitask finetuning}, author={Muennighoff, Niklas and Wang, Thomas and Sutawika, Lintang and Roberts, Adam and Biderman, Stella and Scao, Teven Le and Bari, M Saiful and Shen, Sheng and Yong, Zheng-Xin and Schoelkopf, Hailey and others}, journal={arXiv preprint arXiv:2211.01786}, year={2022} } ```
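As a companion to the file-joining instructions earlier in this card, here is a minimal sketch (not from the original card) of checking the joined file against the SHA256SUM quoted above; the local path is an assumption.

```python
# Verify the joined main-branch file against the digest quoted in the card above.
import hashlib

EXPECTED = "50baeab9859362d22df6f822f158b9ba75b44ffc6605b715992fe6245aa6e93a"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so the ~94 GB safetensors never has to fit in RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

joined = "gptq_model-4bit--1g.safetensors"  # path is an assumption; point at your joined file
assert sha256_of(joined) == EXPECTED, "Joined file is corrupt or incomplete"
print("Checksum OK")
```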
TheBloke/BLOOMChat-176B-v1-GPTQ
TheBloke
2023-07-07T22:03:31Z
13
31
transformers
[ "transformers", "bloom", "text-generation", "license:other", "autotrain_compatible", "region:us" ]
text-generation
2023-07-05T16:33:58Z
--- license: other inference: false --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Sambanova Systems' BLOOMChat 1.0 These files are GPTQ 4bit model files for [Sambanova Systems' BLOOMChat 1.0](https://huggingface.co/sambanovasystems/BLOOMChat-176B-v1). It is the result of quantising to 4-bit using [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ). **This is a BIG model! 2 x 80GB or 3 x 48GB GPUs are required** ## Important note: files must be joined before use It is not currently possible to shard GPTQ files, therefore the model file is one single 94GB `safetensors` file. Huggingface Hub has a 50GB per-file limit. I have therefore been forced to split the file into three parts for upload. I did this using the simple *nix command `split`. To join the files on any *nix system, you can run: ``` cat gptq_model-4bit--1g.JOINBEFOREUSE.split-*.safetensors > gptq_model-4bit--1g.safetensors ``` To join the files on Windows, open a Command Prompt and run: ``` COPY /B gptq_model-4bit--1g.JOINBEFOREUSE.split-a.safetensors + gptq_model-4bit--1g.JOINBEFOREUSE.split-b.safetensors + gptq_model-4bit--1g.JOINBEFOREUSE.split-c.safetensors gptq_model-4bit--1g.safetensors ``` Or for Python code for joining the files, see the Python section below. The SHA256SUM of the joined file will be: ``` 9cc359fa266d2523566e818ca58e8782718b25cc2e714cb5449b7841e1c59830 gptq_model-4bit--1g.safetensors ``` Once you have the joined file, you can safely delete `gptq_model-4bit--1g.JOINBEFOREUSE.split-*.safetensors`. ## Repositories available * [4-bit GPTQ model for GPU inference](https://huggingface.co/TheBloke/BLOOMChat-176B-v1-GPTQ) * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/sambanovasystems/BLOOMChat-176B-v1) ## Two files provided - separate branches - Main branch: `gptq_model-4bit--1g.safetensors` - Group Size = None - Desc Act (act-order) = True - This version will use the least possible VRAM, and should have higher inference performance in CUDA mode - Branch `group_size_128g`: `gptq_model-4bit-128g.safetensors` - Group Size = 128g - Desc Act (act-order) = True - This version will use more VRAM, which shouldn't be a problem as it shouldn't exceed 2 x 80GB or 3 x 48GB cards. - However CUDA inference performance is likely to be a lot slower, possibly necessitating the use of Triton mode. By default you will download the first file, unless you choose to download from branch `group_size_128g`. ## Prompt template: ``` <human>: prompt <bot>: ``` ## How to easily download and use this model in text-generation-webui Please make sure you're using the latest version of text-generation-webui. Note 1: this is a non-Llama model which cannot be used with ExLlama. Use Loader: AutoGPTQ. Note 2: As described above, you must join the files after downloading and before loading in text-generation-webui. 1. Click the **Model tab**. 2.
Under **Download custom model or LoRA**, enter `TheBloke/BLOOMChat-176B-v1-GPTQ`. - If you would rather download the group_size 128g version, enter `TheBloke/BLOOMChat-176B-v1-GPTQ:group_size_128g` 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". This is a huge model so it may take a while! 5. Now follow the steps described above to join the model to get a single `.safetensors` file. 6. Untick **Autoload model**. 7. In the top left, click the refresh icon next to **Model**. 8. In the **Model** dropdown, choose the model you just downloaded: `BLOOMChat-176B-v1-GPTQ` 9. Make sure Loader is set to AutoGPTQ. 10. This model cannot load on one GPU, so you should set **GPU Memory** accordingly. - If using two 80GB GPUs, try: GPU0 = 60GB, GPU1 = 79GB - If using three 48GB GPUs, try: GPU0 = 30GB, GPU1 = 47GB, GPU2 = 47GB 11. Click **Save settings** to save your settings, and then **Reload** to load the model. 12. The model will load, and is now ready for use! 13. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! ## How to use this GPTQ model from Python code First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed: `GITHUB_ACTIONS=true pip install auto-gptq` Because this model has to be joined locally, you must first download it. Example download code: ```python from huggingface_hub import snapshot_download snapshot_download(repo_id="TheBloke/BLOOMChat-176B-v1-GPTQ", local_dir="/workspace/models/BLOOMChat-176B-v1-GPTQ", local_dir_use_symlinks=False) ``` If you want to download the group_size 128g file instead, add `revision="group_size_128g"` to the above command. Now join the three `split` files, which can be done with the following Python code: ```python import glob # Get the list of all files matching the pattern files = sorted(glob.glob('gptq_model-4bit--1g.JOINBEFOREUSE.split-*.safetensors')) # Open the output file in binary write mode with open('gptq_model-4bit--1g.safetensors', 'wb') as outfile: for filename in files: with open(filename, 'rb') as infile: outfile.write(infile.read()) ``` Then try the following example code: ```python from transformers import AutoTokenizer, pipeline, logging from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig import argparse # Use the local path you downloaded the model to and joined the split files in model_name_or_path = "/workspace/models/BLOOMChat-176B-v1-GPTQ" model_basename = "gptq_model-4bit--1g" use_triton = False tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) model = AutoGPTQForCausalLM.from_quantized(model_name_or_path, model_basename=model_basename, max_memory={0: '60GiB', 1: '79GiB'}, # max_memory is for 2 x 80GB GPUs; adjust if your config is different!
use_safetensors=True, trust_remote_code=False, use_triton=use_triton, quantize_config=None) prompt = "Write a story about llamas" prompt_template=f'''<human>: {prompt} <bot>: ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline # Prevent printing spurious transformers error when using pipeline with AutoGPTQ logging.set_verbosity(logging.CRITICAL) print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, temperature=0.7, top_p=0.95, repetition_penalty=1.15 ) print(pipe(prompt_template)[0]['generated_text']) ``` ## Provided files ## Main branch: **gptq_model-4bit--1g.safetensors** This will work with AutoGPTQ. It is untested with GPTQ-for-LLaMa. It will *not* work with ExLlama. It was created with group_size none (-1) to reduce VRAM usage, and with --act-order (desc_act) to improve accuracy of responses. * `gptq_model-4bit-128g.safetensors` * Works with AutoGPTQ in CUDA or Triton modes. * Does NOT work with [ExLlama](https://github.com/turboderp/exllama) as it's not a Llama model. * Untested with GPTQ-for-LLaMa. * Works with text-generation-webui, including one-click-installers. * Parameters: Groupsize = -1. Act Order / desc_act = True. ## Branch `group_size_128g` **gptq_model-4bit-128g.safetensors** This will work with AutoGPTQ. It is untested with GPTQ-for-LLaMa. It will *not* work with ExLlama. It was created with both group_size 128g and --act-order (desc_act) for even higher inference accuracy, at the cost of increased VRAM usage. Because we already need 2 x 80GB or 3 x 48GB GPUs, I don't expect the increased VRAM usage to change the GPU requirements. * `gptq_model-4bit-128g.safetensors` * Works with AutoGPTQ in CUDA or Triton modes. * Does NOT work with [ExLlama](https://github.com/turboderp/exllama) as it's not a Llama model. * Untested with GPTQ-for-LLaMa. * Works with text-generation-webui, including one-click-installers. * Parameters: Groupsize = 128. Act Order / desc_act = True. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. 
**Patreon special mentions**: zynix , ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Sambanova Systems' BLOOMChat V1.0 # BLOOMChat V1.0 <!-- Provide a quick summary of what the model is/does. --> BLOOMChat is a 176 billion parameter multilingual chat model. It is instruction tuned from [BLOOM (176B)](https://huggingface.co/bigscience/bloom) on assistant-style conversation datasets and supports conversation, question answering and generative answers in multiple languages. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [SambaNova Systems](https://sambanova.ai/) - **Co-developed by:** [Together Computer](https://www.together.xyz/) - **Model type:** Language Model - **Language(s):** Multiple; see [training data from BLOOM](https://huggingface.co/bigscience/bloom#training-data) - **License:** BLOOMChat-176B LICENSE v1.0 - **Instruction Tuned from model:** [BigScience Group BLOOM](https://huggingface.co/bigscience/bloom) ### Basic Information <!-- Provide the basic links for the model. --> - **Blog Post**: [Link](https://sambanova.ai/blog/introducing-bloomchat-176b-the-multilingual-chat-based-llm/) - **Discord**: [Link](https://discord.com/invite/8z2Pe7cpRv) - **HF Hosting**: [Chat with me!](https://huggingface.co/spaces/sambanovasystems/BLOOMChat) - **Github**: [Link](https://github.com/sambanova/bloomchat) ### Licensing To increase accessibility and to support the open-source community, SambaNova is releasing BLOOMChat under a modified version of the Apache 2.0 license, which includes use-based restrictions from BLOOM’s RAIL license. While use-based restrictions are necessarily passed through, there are no blanket restrictions on reuse, distribution, commercialization or adaptation. [Please review SambaNova’s BLOOMChat-176B License](LICENSE) ## Uses <details> <summary>Click to expand</summary> <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> This model is intended for commercial and research use. ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. 
--> BLOOMChat should NOT be used for: - Mission-critical applications - Applications that involve the safety of others - Making highly important decisions - Important automated pipelines This model is still in early development and can be prone to mistakes and hallucinations, there is still room for improvement. This model is intended to provide the community with a multilingual chat LLM baseline. ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases, limitations, and restrictions of the model, which are listed down at the bottom of the page. </details> --- ## How to Get Started with the Model <details> <summary>Click to expand</summary> ### Loading in model with Huggingface ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/BLOOMChat-176B-v1") model = AutoModelForCausalLM.from_pretrained("sambanovasystems/BLOOMChat-176B-v1", device_map="auto", torch_dtype="auto") ``` ### Quick Start Inference on SambaNova's in-house Reconfigurable Dataflow Unit (RDU) The inference code to run the model can be found our [github repo](https://github.com/sambanova/bloomchat/blob/main/rdu_quick_start/inference.py). This code requires the [SambaFlow](https://docs.sambanova.ai/developer/latest/sambaflow-intro.html) SDK to execute. For those interested in running models on RDUs, [please feel free to get in touch](https://sambanova.ai/getstarted). ### Quick Start Inference on GPU First create a python virtual environment for these packages ``` python3 -m venv bloomchat_venv source bloomchat_venv/bin/activate pip install --upgrade pip ``` <!-- Please follow this section [Inference solutions for BLOOM 176B](https://github.com/huggingface/transformers-bloom-inference#bloom-inference-via-command-line) in the Huggingface Tutorial for environment set up and stop before the [BLOOM inference via command-line ](https://github.com/huggingface/transformers-bloom-inference#bloom-inference-via-command-line) section. --> ``` pip install flask flask_api gunicorn pydantic accelerate huggingface_hub>=0.9.0 deepspeed>=0.7.3 deepspeed-mii==0.0.2 ``` And then ``` pip install transformers==4.27.0 ``` You will see messages like this ``` ERROR: deepspeed-mii 0.0.2 has requirement transformers==4.21.2, but you'll have transformers 4.27.0 which is incompatible. Installing collected packages: transformers Found existing installation: transformers 4.21.2 Uninstalling transformers-4.21.2: Successfully uninstalled transformers-4.21.2 Successfully installed transformers-4.27.0 ``` Now let's git clone the [huggingface/transformers-bloom-inference](https://github.com/huggingface/transformers-bloom-inference) repo. 
``` git clone https://github.com/huggingface/transformers-bloom-inference.git cd transformers-bloom-inference/ ``` And then you need to modify two files in this [transformers-bloom-inference](https://github.com/huggingface/transformers-bloom-inference) repo: - Modifying `inference_server/models/hf_accelerate.py` - This is because for our testing of this repo we used 4 80GB A100 GPUs and would run into memory issues - Modifying `inference_server/cli.py` - This is because the model was trained using specific human, bot tags - Trailing spaces may lead to subpar performance Modifications for `inference_server/models/hf_accelerate.py`: ```diff diff --git a/inference_server/models/hf_accelerate.py b/inference_server/models/hf_accelerate.py index 9be3c3f..a8ecb1d 100644 --- a/inference_server/models/hf_accelerate.py +++ b/inference_server/models/hf_accelerate.py @@ -1,4 +1,5 @@ from argparse import Namespace +from accelerate.utils.modeling import get_max_memory import torch @@ -12,6 +13,12 @@ class HFAccelerateModel(Model): kwargs = {"pretrained_model_name_or_path": args.model_name, "device_map": "auto"} + original_max_memory_dict = get_max_memory() + + reduce_max_memory_dict = {device_key: int(original_max_memory_dict[device_key] * 0.85) for device_key in original_max_memory_dict} + + kwargs["max_memory"] = reduce_max_memory_dict + if get_world_size() > 1: kwargs["device_map"] = "balanced_low_0" ``` Modifications for `inference_server/cli.py`: ```diff diff --git a/inference_server/cli.py b/inference_server/cli.py index fc903d5..5450236 100644 --- a/inference_server/cli.py +++ b/inference_server/cli.py @@ -22,6 +22,9 @@ def main() -> None: while True: input_text = input("Input text: ") + input_text = input_text.strip() + modified_input_text = f"<human>: {input_text}\n<bot>:" + if input("change generate_kwargs? [y/n] ") == "y": while True: try: @@ -33,7 +36,7 @@ def main() -> None: print("message =", e_message) continue - response = model.generate(text=[input_text], generate_kwargs=generate_kwargs) + response = model.generate(text=[modified_input_text], generate_kwargs=generate_kwargs) print_rank_0("Output text:", response.text[0]) print_rank_0("Generated tokens:", response.num_generated_tokens[0]) ``` And now you are good to go! 
Running command for bf16, NO sampling ``` python -m inference_server.cli --model_name sambanovasystems/BLOOMChat-176B-v1 --model_class AutoModelForCausalLM --dtype bf16 --deployment_framework hf_accelerate --generate_kwargs '{"do_sample": false, "max_new_tokens": 512}' ``` Running command for bf16, YES sampling ``` python -m inference_server.cli --model_name sambanovasystems/BLOOMChat-176B-v1 --model_class AutoModelForCausalLM --dtype bf16 --deployment_framework hf_accelerate --generate_kwargs '{"do_sample": true, "temperature": 0.8, "repetition_penalty": 1.2, "top_p": 0.9, "max_new_tokens": 512}' ``` --- Running command for int8 (sub optimal performance, but fast inference time) NO sampling: ``` python -m inference_server.cli --model_name sambanovasystems/BLOOMChat-176B-v1 --model_class AutoModelForCausalLM --dtype int8 --deployment_framework hf_accelerate --generate_kwargs '{"do_sample": false, "max_new_tokens": 512}' ``` Running command for int8 (sub optimal performance, but fast inference time) YES sampling: ``` python -m inference_server.cli --model_name sambanovasystems/BLOOMChat-176B-v1 --model_class AutoModelForCausalLM --dtype int8 --deployment_framework hf_accelerate --generate_kwargs '{"do_sample": true, "temperature": 0.8, "repetition_penalty": 1.2, "top_p": 0.9, "max_new_tokens": 512}' ``` **DISCLAIMER:** When using int8, the results will be subpar compared to bf16 as the model is being [quantized](https://huggingface.co/blog/hf-bitsandbytes-integration#introduction-to-model-quantization). ### Suggested Inference Parameters - Temperature: 0.8 - Repetition penalty: 1.2 - Top-p: 0.9 - Max generated tokens: 512 ### Suggested Prompts To Try in GPU Tutorial ``` Input text: Write a script in which Bob accidentally breaks his dad's guitar ``` ``` Input text: Create an itemized list of tasks to complete to start a clothing brand ``` ``` Input text: 十七岁的风是什么颜色的? ``` </details> --- ## Some example completions for English <details> <summary>Click to expand</summary> ``` <human>: What does it mean for the Fed to raise or lower interest rates? <bot>: The Federal Reserve (also known as the Fed) is the central banking system of the United States. It was created in 1913, and is an independent agency of the federal government. The Fed has a dual mandate of maximizing employment and stabilizing prices. The Fed sets interest rates, and can influence the money supply and the credit markets through its actions. Raising interest rates means that the Fed is trying to reduce the money supply in the economy, and is trying to discourage borrowing and encourage saving. Lowering interest rates means that the Fed is trying to stimulate economic growth by encouraging borrowing and reducing the cost of credit. The Fed raises or lowers interest rates in order to influence the economy and achieve its goals for employment and price stability. Interest rates are one of the primary tools that the Fed uses to achieve its goals. ``` ``` <human>: Write an engaging and well-written property listing description for selling a house. Address of the house and some of the details are given to you. Fill in the information gap with hallucinations if needed. Property Address: 412 Monterey Ave, Capitola, CA 95010 Parking: 3 cars + a finished single car garage Details: - open floorplan - window with views to park/mountains - Kitchen with shaker cabinetry and white Calcatta quartz counters <bot>: This home at 412 Monterey Ave, Capitola, CA 95010 is the perfect place to call home. 
With its open floor plan, it provides the perfect space for entertaining guests. The kitchen is equipped with shaker cabinetry and white Calcatta quartz counters, providing the perfect space to create culinary masterpieces. The window with views to the park and mountains provide a beautiful view of the outdoors, making this home feel like a piece of paradise. The three-car garage and finished single car garage provide ample parking space, making this home a must-see for anyone looking for a new home. ``` ``` <human>: What is artificial intelligence? Explain it so an 8 year old will understand. <bot>: Artificial Intelligence (AI) is a computer program or machine that can do things that would normally require human intelligence, such as recognizing patterns, making decisions, and learning from experience. AI can be found in many places, such as self-driving cars, smartphones, and even robots. AI is used to make things easier for people, like when your smartphone can translate text into another language or when your voice-controlled speaker can play music for you. AI works by using algorithms and big data to learn and improve over time. For example, self-driving cars use AI to analyze the environment around them and make decisions about how to drive. The car gathers data from its sensors, processes it using algorithms, and adjusts its behavior accordingly. This process is called machine learning. AI is still a developing technology, and there are still many things that AI programs cannot do yet. However, as AI continues to evolve, it is likely that it will become more and more capable of performing tasks that were once thought to require human intelligence. ``` </details> --- ## Some example completions for Multilingual <details> <summary>Click to expand</summary> #### Arabic ``` <human>: هل العمل نصف دوام فكرة جيدة لمراهقين بين عمر ١٥ و ١٩ عام <bot>: نعم، يمكن أن تكون فكرة جيدة لبعض المراهقين في هذا العمر. هناك العديد من الفوائد التي قد يحققها المراهقون إذا عملوا بنصف دوام: - يساعدهم على تعلم المسؤولية وإدارة الوقت: عندما يعمل المراهق بنصف دوام، فإنه يتعلم كيفية إدارة وقته بشكل فعال وكيفية تقسيم ساعات عمله إلى مهام أصغر. - يحسن مهارات التواصل والعمل الجماعي: غالبًا ما يكون مكان العمل بيئة متعددة الثقافات حيث يلتقي المراهقون بمجموعة متنوعة من الأشخاص ويتعلمون كيفية التعاون معهم وتبادل الأفكار. - يوفر فرصة للتعرف على عالم العمل: يتيح العمل للمراهق التعرف على عالم العمل واكتساب خبرة عملية قيمة ستساعده لاحقًا عند البحث عن وظيفة أو بدء حياته المهنية. - يعزز الشعور بالمسؤولية والثقة بالنفس: عندما يعمل المراهق بنصف دوام ويحصل على دخل خاص به، فإنه يكتسب شعورًا أكبر بالمسؤولية والثقة بالنفس. ومع ذلك، لا ينبغي إجبار المراهقين على العمل بنصف دوام دون موافقتهم المسبقة. يجب أيضًا مراعاة أعمارهم وقدراتهم البدنية والعقلية قبل توظيفهم. ``` **DISCLAIMER**: Text ordering may be off for Arabic due to code block not being able to render #### Chinese ``` <human>: 请向我推荐一部周末看的电影并说明原因。 <bot>: “强烈向您推荐《美丽人生》这部影片。该片由罗伯托·贝尼尼自导自演,讲述了二战时期一对犹太父子被送进了纳粹集中营,父亲用自己的想象力为儿子编织了一个美好童真的世界,让儿子的心灵没有受到丝毫伤害的故事。 该片被誉为“最伟大的战争片”、“人类有史以来最佳影片之一”,豆瓣评分高达9.6分。它不仅是一部感人的亲情片,更是一部富含哲理的励志片和人生启示录。片中父亲的乐观豁达、幽默风趣的人生态度,对任何事都抱有积极热情的态度,即使在残酷无比的集中营里也是如此,深深地感染着观众。同时,这也是一部关于爱与希望的电影,让我们更加珍惜当下所拥有的一切,学会如何去爱身边的每一个人。 总之,这是一部值得一看再看的好电影,相信您一定会喜欢的! ``` #### French ``` <human>: Il y a t’il un petit-déjeuner de 700-1000 calories sans oeuf mais qui inclut des protéines. 
<bot>: Oui, voici une recette de petit déjeuner à base d'avoine et de fromage blanc : Ingrédients : - 1 tasse (250 ml) de lait d'amande ou autre lait végétal - 2 cuillères à soupe bombées (30 g) de flocons d'avoine - 1/2 banane mûre écrasée - 1/4 de pomme fraîche épluchée et coupée en dés - 1 cuillère à soupe de beurre d'amande ou autre purée d'oléagineux - 1 cuillère à café de cannelle moulue - 1 pincée de sel Préparation : 1. Dans une petite casserole, faites chauffer le lait jusqu'à ce qu'il soit chaud. 2. Ajoutez les flocons d'avoine, la banane, la pomme, le beurre d'amande, la cannelle et le sel dans le bol du mixeur et mélangez bien pour obtenir une pâte homogène. 3. Versez le mélange dans le lait chaud et remuez rapidement avec une spatule en bois pour éviter que les flocons d'avoine ne s'agglutinent au fond de la casserole. 4. Laissez mijoter sur feu doux pendant environ 5 minutes tout en continuant de remuer régulièrement. 5. Transférez ensuite le porridge dans un bol et servez immédiatement. Vous pouvez ajouter quelques fruits rouges frais pour décorer et apporter encore plus de gourmandise à votre petit déjeuner ! Bon appétit ! ``` #### Hindi ``` <human>: दिल्ली से जयपुर कैसे जाये? बस से या कार से? दोनो के बारे मे विस्तार मे बताये. <bot>: दिल्ली से जयपुर जाने का सबसे आसान और तेज़ तरीका है कि आप अपनी खुद की कार लेकर जाएं, क्योंकि यह आपको अपने समय पर निर्भर रहने की स्वतंत्रता देता है। हालांकि, यदि आपके पास अपनी कार नहीं है, तो आप दिल्ली से जयपुर तक एक निजी वाहन किराए पर ले सकते हैं। यदि आप सार्वजनिक परिवहन (बस) का उपयोग करना चाहते हैं, तो दिल्ली में कई सरकारी और निजी बस सेवाएं उपलब्ध हैं जो जयपुर को जोड़ती हैं। दिल्ली से जयपुर जाने वाली कुछ प्रमुख बसें हैं- 1. राजस्थान रोडवेज: ये राज्य सरकार द्वारा संचालित बसें हैं जो दिल्ली से जयपुर के बीच चलती हैं। ये नियमित रूप से चलती हैं और इनके लिए अग्रिम आरक्षण आवश्यक नहीं है। 2. ALSA: यह एक निजी कंपनी है जो दिल्ली से जयपुर के बीच नियमित रूप से चलती है। इसके टिकटों की बुकिंग ऑनलाइन और ऑफलाइन दोनों माध्यमों से की जा सकती है। 3. RedBus: यह एक ऑनलाइन बस बुकिंग वेबसाइट है जहां से आप दिल्ली से जयपुर के बीच चलनेवाली विभिन्न कंपनियों की बसों में सीट आरक्षित करा सकते हैं। 4. Ola and Uber: इन ऐप्स का उपयोग करके आप दिल्ली से जयपुर तक कैब राइड बुक कर सकते हैं, लेकिन ध्यान रखें कि यह विकल्प सस्ता नहीं होगा। दिल्ली से जयपुर जाने का सबसे अच्छा विकल्प आपकी व्यक्तिगत आवश्यकताओं, बजट, और उपलब्धता पर निर्भर करता है। यदि आप अपनी खुद की कार रखते हैं, तो इसका लाभ उठाएं अन्यथा किसी भी सार्वजनिक परिवहन साधन का चयन करें जो आपकी आवश्यकताओं को पूरा करे। ``` #### Spanish ``` <human>: ¿Me puedes dar algunas recomendaciones de juegos? Estoy buscando juegos de rol con un sistema de magia único, un sistema de construcción único y una buena historia. Por favor limítate a juegos que se puedan jugar en PC directamente, o por emulación. <bot>: Claro! Aquí te dejo tres recomendaciones: 1. The Outer Worlds: es un juego de rol de ciencia ficción con un sistema de magia basado en la tecnología alienígena que permite al jugador personalizar sus habilidades para adaptarse a su estilo de juego. 2. Divinity: Original Sin 2 - Definitive Edition: este juego combina elementos de RPG con estrategia en tiempo real (RTS), lo que le da al jugador un control total sobre cómo construir y mejorar su personaje y el mundo que lo rodea. 3. Pillars of Eternity: esta aventura de fantasía oscura tiene lugar en un vasto mundo abierto donde los jugadores pueden elegir entre diferentes clases y desarrollar relaciones con otros personajes del juego. 
La mecánica de combate está basada en turnos y utiliza hechizos y armas cuerpo a cuerpo. Estos son solo algunos ejemplos de juegos que podrían interesarte según tus criterios. Hay muchos más títulos disponibles en el mercado que podrían ajustarse a tu gusto, así que no dudes en explorar otras opciones si estos juegos no cumplen con tus expectativas. ``` </details> --- ## Evaluation Graphs <details> <summary>Click to expand</summary> <!-- This section describes the evaluation protocols and provides the results. --> ![Human evaluation](images/Human_evaluation.png) <figure style="text-align:center;"> <figcaption><b>BLOOMChat vs Baselines Model in Human Preference Rankings</b></figcaption> </figure> <br> ![Human evaluation against GPT4](images/Human_evaluation_gpt4.png) <figure style="text-align:center;"> <figcaption><b>BLOOMChat vs GPT-4 in Human Preference Ranking</b></figcaption> </figure> <br> ![Multilingual evaluation](images/Multilingual_capabilities_comparison.png) <figure style="text-align:center;"> <figcaption><b>BLOOMChat surpasses other Bloom variants and state-of-the-art open-source chat models in translation tasks [NOTE: Evaluation of the BLOOM and BLOOMZ in WMT18 en->zh zh->en used (human, bot) ChatML tags due to an unintentional configuration. Results might be suboptimal.]</b></figcaption> </figure> <br> </details> --- ## Training Details <details> <summary>Click to expand</summary> ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> - [OIG dataset from OpenChatKit](https://huggingface.co/datasets/laion/OIG) - [Dolly 2.0](https://huggingface.co/datasets/databricks/databricks-dolly-15k) - [Oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> We trained BLOOMChat with [SambaNova DataScale systems](https://sambanova.ai/products/datascale/) with SambaNova's in-house Reconfigurable Dataflow Unit (RDU). We started from [BLOOM (176B)](https://huggingface.co/bigscience/bloom), an open-source multilingual LLM pretrained by the [BigScience group](https://huggingface.co/bigscience). We instruction-tune BLOOM (176B) on OpenChatKit with each data source subsampled to 100k for one epoch, followed by three epochs over the combined OpenChatKit and Dolly 2.0. 
All of the code used to prepare the datasets and the scripts to run training and inference are open-sourced and freely available at [sambanova/bloomchat](https://github.com/sambanova/bloomchat/tree/main) ### Prompting Style Used For Training ``` <human>: {input1 that the user wants from the bot} <bot>: {response1}</s> <human>: {input2 that the user wants from the bot} <bot>: {response2}</s> ``` ### Hyperparameters **Instruction-tuned Training on OIG** - Hardware: SambaNova Reconfigurable Dataflow Unit (RDU) - Optimizer: AdamW - Grad accumulation: 1 - Epochs: 1 - Global Batch size: 128 - Batch tokens: 128 * 2048 = 262,144 tokens - Learning Rate: 1e-5 - Learning Rate Scheduler: Cosine Schedule with Warmup - Warmup Steps: 0 - End Learning Ratio: 0.1 - Weight decay: 0.1 **Instruction-tuned Training on Dolly 2.0 and Oasst1** - Hardware: SambaNova Reconfigurable Dataflow Unit (RDU) - Optimizer: AdamW - Grad accumulation: 1 - Epochs: 3 - Global Batch size: 128 - Batch tokens: 128 * 2048 = 262,144 tokens - Learning Rate: 1e-5 - Learning Rate Scheduler: Cosine Schedule with Warmup - Warmup Steps: 0 - End Learning Ratio: 0.1 - Weight decay: 0.1 </details> --- ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> Like all LLMs, BLOOMChat has certain limitations: - Hallucination: BLOOMChat may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information. - Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output. - Repetition: BLOOMChat may produce repetitive phrases or sentences, leading to less engaging and informative responses. - Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited. - Toxicity: BLOOMChat may inadvertently generate responses containing inappropriate or harmful content. ## Acknowledgment We would like to extend our gratitude to [Together](https://www.together.xyz/) for their insightful technical discussions on overall project planning, data processing, model training, human evaluation experiment design, open-source endeavors, and their contributions on data processing code on OpenChatKit, OASST1, and Dolly 2.0. We are grateful to the various researchers and open-source projects that have contributed to the development of BLOOMChat. We thank [BigScience](https://bigscience.huggingface.co/) for providing the [BLOOM](https://huggingface.co/bigscience/bloom) model, which served as the base for our instruction tuning. We also thank [LAION](https://laion.ai/) for their [OIG dataset](https://huggingface.co/datasets/laion/OIG), OpenAssistant Conversations Dataset ([OASST1](https://huggingface.co/datasets/OpenAssistant/oasst1)) and also thank [Databricks](https://www.databricks.com/) for providing [Dolly 2.0](https://huggingface.co/datasets/databricks/databricks-dolly-15k), to provide the dataset that we instruction tuned on. We appreciate [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness) and [BigScience](https://bigscience.huggingface.co/) for their essential benchmarking contributions, which is very helpful in evaluating BLOOMChat's performance. 
We appreciate the inspiration from the wave of various recent open-source chat models, including [OpenAssistant-30B](https://huggingface.co/OpenAssistant/oasst-sft-7-llama-30b-xor), [LLaMA-Adapter-V2-65B](https://github.com/ZrrSkywalker/LLaMA-Adapter/tree/main/llama_adapter_v2_chat65b), [Vicuna-13b](https://huggingface.co/lmsys/vicuna-13b-delta-v0), [Koala-13b](https://huggingface.co/TheBloke/koala-13B-HF), [OASST-Pythia-12b](https://huggingface.co/OpenAssistant/oasst-sft-1-pythia-12b), [Alpaca-13b](https://huggingface.co/anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g), [ChatGLM-6b](https://github.com/THUDM/ChatGLM-6B), [FastChat-T5-3b](https://huggingface.co/lmsys/fastchat-t5-3b-v1.0), [Dolly-v2-12b](https://huggingface.co/databricks/dolly-v2-12b), [LLaMA-13b](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/), [StableLM-Tuned-Alpha-7b](https://huggingface.co/stabilityai/stablelm-tuned-alpha-7b), [RedPajama-INCITE-Chat-7B-v0.1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-7B-v0.1), [RedPajama-INCITE-Chat-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1), [MPT-7B-Chat](https://huggingface.co/mosaicml/mpt-7b-chat) and so on. We look forward to witnessing the continued growth and success of open-source chat-based models. We highly appreciate the hard work and dedication of these researchers and organizations towards the advancement of the open-source community. Their contributions were invaluable in the development of BLOOMChat, and we hope that our model can contribute to further advancements in the field. ## Cite BLOOMChat ``` @software{bloomchat, title = {{BLOOMChat: a New Open Multilingual Chat LLM}}, author = {SambaNova Systems, Together Computer}, url = {https://huggingface.co/sambanovasystems/BLOOMChat-176B-v1} month = {5}, year = {2023}, version = {1.0}, } ```
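## Example Usage With Transformers (Sketch)

The inference commands above go through the `inference_server` CLI; as a rough, untested sketch, the documented `<human>:`/`<bot>:` prompting style and the suggested sampling parameters can also be wired into a plain Hugging Face `transformers` call. The prompt reuses one of the suggested prompts above, but the bfloat16 / `device_map="auto"` loading choices and the multi-GPU hardware assumption are not taken from this card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "sambanovasystems/BLOOMChat-176B-v1"

# The 176B checkpoint needs several high-memory GPUs; device_map="auto" shards it across them.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Same chat tags as used during instruction tuning.
prompt = "<human>: Create an itemized list of tasks to complete to start a clothing brand\n<bot>:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Suggested inference parameters from this card.
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.8,
    repetition_penalty=1.2,
    top_p=0.9,
    max_new_tokens=512,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```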
HeshamMamdouh/arabart-finetune-sum-v5-fine-tuned
HeshamMamdouh
2023-07-07T21:56:34Z
61
0
transformers
[ "transformers", "tf", "mbart", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-07T21:53:30Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: arabart-finetune-sum-v5-fine-tuned results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # arabart-finetune-sum-v5-fine-tuned This model is a fine-tuned version of [abdalrahmanshahrour/AraBART-summ](https://huggingface.co/abdalrahmanshahrour/AraBART-summ) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.6392 - Validation Loss: 2.9100 - Train Lr: 2e-05 - Epoch: 6 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Lr | Epoch | |:----------:|:---------------:|:--------:|:-----:| | 3.1837 | 2.8523 | 2e-05 | 0 | | 3.0537 | 2.8248 | 2e-05 | 1 | | 2.9419 | 2.8509 | 2e-05 | 2 | | 2.8629 | 2.8580 | 2e-05 | 3 | | 2.8086 | 2.8829 | 2e-05 | 4 | | 2.7110 | 2.8474 | 2e-05 | 5 | | 2.6392 | 2.9100 | 2e-05 | 6 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.13.0 - Datasets 2.13.1 - Tokenizers 0.13.3
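## Example Usage (Sketch)

The card stops at the training log, so the snippet below is only a rough illustration of how the checkpoint could be queried for Arabic summarization; the `framework="tf"` choice assumes the repository ships TensorFlow weights, and the input text and generation lengths are placeholders.

```python
from transformers import pipeline

# Keras-trained checkpoint, so request the TensorFlow weights explicitly.
summarizer = pipeline(
    "summarization",
    model="HeshamMamdouh/arabart-finetune-sum-v5-fine-tuned",
    framework="tf",
)

arabic_article = "..."  # replace with the Arabic text to summarize
print(summarizer(arabic_article, max_length=128, min_length=16)[0]["summary_text"])
```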
Calam1/t5-small-finetuned-wikisql
Calam1
2023-07-07T21:54:10Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:wikisql", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-07T14:01:12Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wikisql model-index: - name: t5-small-finetuned-wikisql results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-wikisql This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikisql dataset. It achieves the following results on the evaluation set: - Loss: 0.1246 - Rouge2 Precision: 0.8182 - Rouge2 Recall: 0.7261 - Rouge2 Fmeasure: 0.7623 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | |:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:| | 0.1953 | 1.0 | 4049 | 0.1574 | 0.7939 | 0.7035 | 0.739 | | 0.1644 | 2.0 | 8098 | 0.1375 | 0.8082 | 0.7167 | 0.7527 | | 0.1517 | 3.0 | 12147 | 0.1296 | 0.8141 | 0.7223 | 0.7584 | | 0.146 | 4.0 | 16196 | 0.1256 | 0.817 | 0.7253 | 0.7613 | | 0.1413 | 5.0 | 20245 | 0.1246 | 0.8182 | 0.7261 | 0.7623 | ### Framework versions - Transformers 4.26.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
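## Example Usage (Sketch)

The card does not show an inference example; as an assumed sketch, the model can be called through the `text2text-generation` pipeline. The `translate English to SQL:` prefix follows the common WikiSQL fine-tuning recipe and is an assumption — the card does not state which task prefix was used.

```python
from transformers import pipeline

text2sql = pipeline("text2text-generation", model="Calam1/t5-small-finetuned-wikisql")

# Task prefix is assumed; adjust it to whatever was used during fine-tuning.
question = "translate English to SQL: How many heads of the departments are older than 56?"
print(text2sql(question, max_length=64)[0]["generated_text"])
```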
andressrg/textual_inversion_meal_0_100
andressrg
2023-07-07T21:52:33Z
32
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-07T21:40:21Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion inference: true --- # Textual inversion text2image fine-tuning - andressrg/textual_inversion_meal_0_100 These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
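## Example Usage (Sketch)

A rough sketch of loading the learned embedding with `diffusers`; the placeholder token `<meal>` is a guess (check the repo's `learned_embeds` file for the actual token name), and a CUDA GPU is assumed.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the textual inversion embedding trained in this repo.
pipe.load_textual_inversion("andressrg/textual_inversion_meal_0_100")

# "<meal>" is a hypothetical placeholder token; replace it with the token the
# embedding was actually trained with.
image = pipe("a photo of <meal> on a wooden table", num_inference_steps=30).images[0]
image.save("meal.png")
```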
jordyvl/dit-tiny_tobacco3482_kd_CEKD_t2.0_a0.5
jordyvl
2023-07-07T21:47:34Z
162
0
transformers
[ "transformers", "pytorch", "beit", "image-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-07T21:35:24Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: dit-tiny_tobacco3482_kd_CEKD_t2.0_a0.5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dit-tiny_tobacco3482_kd_CEKD_t2.0_a0.5 This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.5976 - Accuracy: 0.18 - Brier Loss: 0.8781 - Nll: 6.8947 - F1 Micro: 0.18 - F1 Macro: 0.0306 - Ece: 0.2499 - Aurc: 0.8510 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:| | No log | 0.96 | 3 | 3.8479 | 0.145 | 0.8999 | 10.1604 | 0.145 | 0.0253 | 0.2222 | 0.8467 | | No log | 1.96 | 6 | 3.8090 | 0.145 | 0.8946 | 10.5967 | 0.145 | 0.0253 | 0.2246 | 0.8470 | | No log | 2.96 | 9 | 3.7500 | 0.16 | 0.8866 | 8.6365 | 0.16 | 0.0406 | 0.2205 | 0.8486 | | No log | 3.96 | 12 | 3.7003 | 0.16 | 0.8805 | 6.5484 | 0.16 | 0.0327 | 0.2242 | 0.8816 | | No log | 4.96 | 15 | 3.6677 | 0.155 | 0.8776 | 6.7592 | 0.155 | 0.0271 | 0.2365 | 0.8919 | | No log | 5.96 | 18 | 3.6477 | 0.155 | 0.8770 | 7.2639 | 0.155 | 0.0278 | 0.2368 | 0.8961 | | No log | 6.96 | 21 | 3.6339 | 0.18 | 0.8774 | 7.3546 | 0.18 | 0.0313 | 0.2486 | 0.8556 | | No log | 7.96 | 24 | 3.6240 | 0.18 | 0.8781 | 7.0685 | 0.18 | 0.0308 | 0.2654 | 0.8528 | | No log | 8.96 | 27 | 3.6163 | 0.18 | 0.8784 | 7.0041 | 0.18 | 0.0306 | 0.2561 | 0.8532 | | No log | 9.96 | 30 | 3.6114 | 0.18 | 0.8787 | 6.9904 | 0.18 | 0.0306 | 0.2584 | 0.8537 | | No log | 10.96 | 33 | 3.6078 | 0.18 | 0.8788 | 6.9806 | 0.18 | 0.0306 | 0.2594 | 0.8538 | | No log | 11.96 | 36 | 3.6052 | 0.18 | 0.8789 | 6.9768 | 0.18 | 0.0306 | 0.2596 | 0.8537 | | No log | 12.96 | 39 | 3.6034 | 0.18 | 0.8788 | 6.9716 | 0.18 | 0.0306 | 0.2507 | 0.8532 | | No log | 13.96 | 42 | 3.6018 | 0.18 | 0.8786 | 6.9683 | 0.18 | 0.0306 | 0.2548 | 0.8527 | | No log | 14.96 | 45 | 3.6005 | 0.18 | 0.8786 | 6.9040 | 0.18 | 0.0306 | 0.2597 | 0.8524 | | No log | 15.96 | 48 | 3.5995 | 0.18 | 0.8784 | 6.8978 | 0.18 | 0.0306 | 0.2685 | 0.8518 | | No log | 16.96 | 51 | 3.5989 | 0.18 | 0.8784 | 6.8972 | 0.18 | 0.0306 | 0.2641 | 0.8515 | | No log | 17.96 | 54 | 3.5989 | 0.18 | 0.8784 | 6.8961 | 0.18 | 0.0306 | 0.2550 | 0.8513 | | No log | 18.96 | 57 | 3.5988 | 0.18 | 0.8784 | 6.8968 | 0.18 | 0.0306 | 0.2505 | 0.8510 | | No log | 19.96 | 60 | 3.5982 | 0.18 | 0.8782 | 6.8956 | 0.18 | 0.0306 | 0.2478 | 0.8511 | | No log | 20.96 | 63 | 3.5980 | 0.18 | 0.8782 | 6.8954 | 0.18 | 0.0306 | 0.2456 | 0.8507 | | No log | 21.96 | 66 | 3.5978 | 0.18 | 0.8782 | 6.8951 | 0.18 | 0.0306 | 0.2499 | 0.8511 | | No log | 22.96 | 69 | 3.5976 | 0.18 | 0.8781 | 6.8949 
| 0.18 | 0.0306 | 0.2499 | 0.8510 | | No log | 23.96 | 72 | 3.5976 | 0.18 | 0.8781 | 6.8949 | 0.18 | 0.0306 | 0.2499 | 0.8510 | | No log | 24.96 | 75 | 3.5976 | 0.18 | 0.8781 | 6.8947 | 0.18 | 0.0306 | 0.2499 | 0.8510 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
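## Example Usage (Sketch)

The card reports metrics only; as a rough sketch, the fine-tuned classifier can be called through the `image-classification` pipeline. The image path is a placeholder, and the reported accuracy (0.18) suggests this checkpoint is mainly useful within the distillation experiments it belongs to.

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="jordyvl/dit-tiny_tobacco3482_kd_CEKD_t2.0_a0.5",
)

# Placeholder path: any scanned document image in the Tobacco3482 style.
print(classifier("scanned_document.png", top_k=5))
```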
Manab/donut-base-my_model_rapido_2_new_check_4
Manab
2023-07-07T21:29:12Z
45
0
transformers
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "base_model:naver-clova-ix/donut-base", "base_model:finetune:naver-clova-ix/donut-base", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-07-07T21:22:11Z
--- license: mit base_model: naver-clova-ix/donut-base tags: - generated_from_trainer datasets: - imagefolder model-index: - name: donut-base-my_model_rapido_2_new_check_4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-my_model_rapido_2_new_check_4 This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8758 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 7.1017 | 0.69 | 50 | 1.7221 | | 1.4162 | 1.39 | 100 | 0.8758 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
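## Example Usage (Sketch)

Donut checkpoints are normally driven by a task-specific start prompt, which this card does not document; the sketch below therefore uses a generic `<s>` prompt and a placeholder image path, and should be adapted to the prompt token used during fine-tuning.

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "Manab/donut-base-my_model_rapido_2_new_check_4"
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("document.jpg").convert("RGB")  # placeholder path
pixel_values = processor(image, return_tensors="pt").pixel_values

# "<s>" is a generic start token; the actual task prompt of this fine-tune is undocumented.
decoder_input_ids = processor.tokenizer(
    "<s>", add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```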
openlm-research/open_llama_7b_v2_easylm
openlm-research
2023-07-07T21:26:52Z
0
4
null
[ "dataset:tiiuae/falcon-refinedweb", "dataset:bigcode/starcoderdata", "dataset:togethercomputer/RedPajama-Data-1T", "arxiv:2302.13971", "license:apache-2.0", "region:us" ]
null
2023-07-07T19:52:09Z
--- license: apache-2.0 datasets: - tiiuae/falcon-refinedweb - bigcode/starcoderdata - togethercomputer/RedPajama-Data-1T --- # OpenLLaMA: An Open Reproduction of LLaMA **TL;DR**: we are releasing our public preview of OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA. We are releasing a series of 3B, 7B and 13B models trained on different data mixtures. Our model weights can serve as the drop in replacement of LLaMA in existing implementations. In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing a series of 3B, 7B and 13B models trained on 1T tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. The v2 model is better than the old v1 model trained on a different data mixture. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details. ## Weights Release, License and Usage We release the weights in two formats: an EasyLM format to be use with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license. ### Loading the Weights with Hugging Face Transformers Preview checkpoints can be directly loaded from Hugging Face Hub. **Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that** [**the auto-converted fast tokenizer sometimes gives incorrect tokenizations**](https://github.com/huggingface/transformers/issues/24233)**.** This can be achieved by directly using the `LlamaTokenizer` class, or passing in the `use_fast=False` option for the `AutoTokenizer` class. See the following example for usage. ```python import torch from transformers import LlamaTokenizer, LlamaForCausalLM ## v2 models model_path = 'openlm-research/open_llama_7b_v2' ## v1 models # model_path = 'openlm-research/open_llama_3b' # model_path = 'openlm-research/open_llama_7b' # model_path = 'openlm-research/open_llama_13b' tokenizer = LlamaTokenizer.from_pretrained(model_path) model = LlamaForCausalLM.from_pretrained( model_path, torch_dtype=torch.float16, device_map='auto', ) prompt = 'Q: What is the largest animal?\nA:' input_ids = tokenizer(prompt, return_tensors="pt").input_ids generation_output = model.generate( input_ids=input_ids, max_new_tokens=32 ) print(tokenizer.decode(generation_output[0])) ``` For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama). ### Evaluating with LM-Eval-Harness The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. 
This can be achieved by passing in `use_fast=False` to [this part of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/4b701e228768052cfae9043dca13e82052ca5eea/lm_eval/models/huggingface.py#LL313C9-L316C10), as shown in the example below: ```python tokenizer = self.AUTO_TOKENIZER_CLASS.from_pretrained( pretrained if tokenizer is None else tokenizer, revision=revision + ("/" + subfolder if subfolder is not None else ""), use_fast=False ) ``` ### Loading the Weights with EasyLM For using the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md). Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch so it is no longer needed to obtain the original LLaMA tokenizer and weights. ## Dataset and Training The v1 models are trained on the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). The v2 models are trained on a mixture of the [Falcon refined-web dataset](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), the [StarCoder dataset](https://huggingface.co/datasets/bigcode/starcoderdata) and the wikipedia, arxiv, book and stackexchange part of the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). We follow the exactly same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs open datasets rather than the one utilized by the original LLaMA. We train the models on cloud TPU-v4s using [EasyLM](https://github.com/young-geng/EasyLM), a JAX based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and [fully sharded data parallelism (also know as ZeRO stage 3)](https://engineering.fb.com/2021/07/15/open-source/fsdp/) to balance the training throughput and memory usage. Overall we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model. ## Evaluation We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443). Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/). The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks. 
| **Task/Metric** | GPT-J 6B | LLaMA 7B | LLaMA 13B | OpenLLaMA 7Bv2 | OpenLLaMA 3B | OpenLLaMA 7B | OpenLLaMA 13B | | ---------------------- | -------- | -------- | --------- | -------------- | ------------ | ------------ | ------------- | | anli_r1/acc | 0.32 | 0.35 | 0.35 | 0.34 | 0.33 | 0.33 | 0.33 | | anli_r2/acc | 0.34 | 0.34 | 0.36 | 0.35 | 0.32 | 0.36 | 0.33 | | anli_r3/acc | 0.35 | 0.37 | 0.39 | 0.39 | 0.35 | 0.38 | 0.40 | | arc_challenge/acc | 0.34 | 0.39 | 0.44 | 0.39 | 0.34 | 0.37 | 0.41 | | arc_challenge/acc_norm | 0.37 | 0.41 | 0.44 | 0.41 | 0.37 | 0.38 | 0.44 | | arc_easy/acc | 0.67 | 0.68 | 0.75 | 0.73 | 0.69 | 0.72 | 0.75 | | arc_easy/acc_norm | 0.62 | 0.52 | 0.59 | 0.70 | 0.65 | 0.68 | 0.70 | | boolq/acc | 0.66 | 0.75 | 0.71 | 0.72 | 0.68 | 0.71 | 0.75 | | hellaswag/acc | 0.50 | 0.56 | 0.59 | 0.56 | 0.49 | 0.53 | 0.56 | | hellaswag/acc_norm | 0.66 | 0.73 | 0.76 | 0.75 | 0.67 | 0.72 | 0.76 | | openbookqa/acc | 0.29 | 0.29 | 0.31 | 0.30 | 0.27 | 0.30 | 0.31 | | openbookqa/acc_norm | 0.38 | 0.41 | 0.42 | 0.41 | 0.40 | 0.40 | 0.43 | | piqa/acc | 0.75 | 0.78 | 0.79 | 0.79 | 0.75 | 0.76 | 0.77 | | piqa/acc_norm | 0.76 | 0.78 | 0.79 | 0.80 | 0.76 | 0.77 | 0.79 | | record/em | 0.88 | 0.91 | 0.92 | 0.89 | 0.88 | 0.89 | 0.91 | | record/f1 | 0.89 | 0.91 | 0.92 | 0.89 | 0.89 | 0.90 | 0.91 | | rte/acc | 0.54 | 0.56 | 0.69 | 0.57 | 0.58 | 0.60 | 0.64 | | truthfulqa_mc/mc1 | 0.20 | 0.21 | 0.25 | 0.23 | 0.22 | 0.23 | 0.25 | | truthfulqa_mc/mc2 | 0.36 | 0.34 | 0.40 | 0.35 | 0.35 | 0.35 | 0.38 | | wic/acc | 0.50 | 0.50 | 0.50 | 0.50 | 0.48 | 0.51 | 0.47 | | winogrande/acc | 0.64 | 0.68 | 0.70 | 0.66 | 0.62 | 0.67 | 0.70 | | Average | 0.52 | 0.55 | 0.57 | 0.56 | 0.53 | 0.55 | 0.57 | We removed the task CB and WSC from our benchmark, as our model performs suspiciously high on these two tasks. We hypothesize that there could be a benchmark data contamination in the training set. ## Contact We would love to get feedback from the community. If you have any questions, please open an issue or contact us. OpenLLaMA is developed by: [Xinyang Geng](https://young-geng.xyz/)* and [Hao Liu](https://www.haoliu.site/)* from Berkeley AI Research. *Equal Contribution ## Acknowledgment We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources. We’d like to specially thank Jonathan Caton from TPU Research Cloud for helping us organizing compute resources, Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimizing our training throughput. We’d also want to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback. The OpenLLaMA 13B v1 model is trained in collaboration with [Stability AI](https://stability.ai/), and we thank Stability AI for providing the computation resources. We’d like to especially thank David Ha and Shivanshu Purohit for the coordinating the logistics and providing engineering support. 
## Reference If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX: ``` @software{openlm2023openllama, author = {Geng, Xinyang and Liu, Hao}, title = {OpenLLaMA: An Open Reproduction of LLaMA}, month = May, year = 2023, url = {https://github.com/openlm-research/open_llama} } ``` ``` @software{together2023redpajama, author = {Together Computer}, title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset}, month = April, year = 2023, url = {https://github.com/togethercomputer/RedPajama-Data} } ``` ``` @article{touvron2023llama, title={Llama: Open and efficient foundation language models}, author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others}, journal={arXiv preprint arXiv:2302.13971}, year={2023} } ```
openlm-research/open_llama_7b_v2
openlm-research
2023-07-07T21:26:13Z
3,256
116
transformers
[ "transformers", "pytorch", "llama", "text-generation", "dataset:tiiuae/falcon-refinedweb", "dataset:bigcode/starcoderdata", "dataset:togethercomputer/RedPajama-Data-1T", "arxiv:2302.13971", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-06T08:23:04Z
--- license: apache-2.0 datasets: - tiiuae/falcon-refinedweb - bigcode/starcoderdata - togethercomputer/RedPajama-Data-1T library_name: transformers --- # OpenLLaMA: An Open Reproduction of LLaMA **TL;DR**: we are releasing our public preview of OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA. We are releasing a series of 3B, 7B and 13B models trained on different data mixtures. Our model weights can serve as the drop in replacement of LLaMA in existing implementations. In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing a series of 3B, 7B and 13B models trained on 1T tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. The v2 model is better than the old v1 model trained on a different data mixture. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details. ## Weights Release, License and Usage We release the weights in two formats: an EasyLM format to be use with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license. ### Loading the Weights with Hugging Face Transformers Preview checkpoints can be directly loaded from Hugging Face Hub. **Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that** [**the auto-converted fast tokenizer sometimes gives incorrect tokenizations**](https://github.com/huggingface/transformers/issues/24233)**.** This can be achieved by directly using the `LlamaTokenizer` class, or passing in the `use_fast=False` option for the `AutoTokenizer` class. See the following example for usage. ```python import torch from transformers import LlamaTokenizer, LlamaForCausalLM ## v2 models model_path = 'openlm-research/open_llama_7b_v2' ## v1 models # model_path = 'openlm-research/open_llama_3b' # model_path = 'openlm-research/open_llama_7b' # model_path = 'openlm-research/open_llama_13b' tokenizer = LlamaTokenizer.from_pretrained(model_path) model = LlamaForCausalLM.from_pretrained( model_path, torch_dtype=torch.float16, device_map='auto', ) prompt = 'Q: What is the largest animal?\nA:' input_ids = tokenizer(prompt, return_tensors="pt").input_ids generation_output = model.generate( input_ids=input_ids, max_new_tokens=32 ) print(tokenizer.decode(generation_output[0])) ``` For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama). ### Evaluating with LM-Eval-Harness The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. 
This can be achieved by passing in `use_fast=False` to [this part of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/4b701e228768052cfae9043dca13e82052ca5eea/lm_eval/models/huggingface.py#LL313C9-L316C10), as shown in the example below: ```python tokenizer = self.AUTO_TOKENIZER_CLASS.from_pretrained( pretrained if tokenizer is None else tokenizer, revision=revision + ("/" + subfolder if subfolder is not None else ""), use_fast=False ) ``` ### Loading the Weights with EasyLM For using the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md). Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch so it is no longer needed to obtain the original LLaMA tokenizer and weights. ## Dataset and Training The v1 models are trained on the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). The v2 models are trained on a mixture of the [Falcon refined-web dataset](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), the [StarCoder dataset](https://huggingface.co/datasets/bigcode/starcoderdata) and the wikipedia, arxiv, book and stackexchange part of the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). We follow the exactly same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs open datasets rather than the one utilized by the original LLaMA. We train the models on cloud TPU-v4s using [EasyLM](https://github.com/young-geng/EasyLM), a JAX based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and [fully sharded data parallelism (also know as ZeRO stage 3)](https://engineering.fb.com/2021/07/15/open-source/fsdp/) to balance the training throughput and memory usage. Overall we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model. ## Evaluation We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443). Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/). The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks. 
| **Task/Metric** | GPT-J 6B | LLaMA 7B | LLaMA 13B | OpenLLaMA 7Bv2 | OpenLLaMA 3B | OpenLLaMA 7B | OpenLLaMA 13B | | ---------------------- | -------- | -------- | --------- | -------------- | ------------ | ------------ | ------------- | | anli_r1/acc | 0.32 | 0.35 | 0.35 | 0.34 | 0.33 | 0.33 | 0.33 | | anli_r2/acc | 0.34 | 0.34 | 0.36 | 0.35 | 0.32 | 0.36 | 0.33 | | anli_r3/acc | 0.35 | 0.37 | 0.39 | 0.39 | 0.35 | 0.38 | 0.40 | | arc_challenge/acc | 0.34 | 0.39 | 0.44 | 0.39 | 0.34 | 0.37 | 0.41 | | arc_challenge/acc_norm | 0.37 | 0.41 | 0.44 | 0.41 | 0.37 | 0.38 | 0.44 | | arc_easy/acc | 0.67 | 0.68 | 0.75 | 0.73 | 0.69 | 0.72 | 0.75 | | arc_easy/acc_norm | 0.62 | 0.52 | 0.59 | 0.70 | 0.65 | 0.68 | 0.70 | | boolq/acc | 0.66 | 0.75 | 0.71 | 0.72 | 0.68 | 0.71 | 0.75 | | hellaswag/acc | 0.50 | 0.56 | 0.59 | 0.56 | 0.49 | 0.53 | 0.56 | | hellaswag/acc_norm | 0.66 | 0.73 | 0.76 | 0.75 | 0.67 | 0.72 | 0.76 | | openbookqa/acc | 0.29 | 0.29 | 0.31 | 0.30 | 0.27 | 0.30 | 0.31 | | openbookqa/acc_norm | 0.38 | 0.41 | 0.42 | 0.41 | 0.40 | 0.40 | 0.43 | | piqa/acc | 0.75 | 0.78 | 0.79 | 0.79 | 0.75 | 0.76 | 0.77 | | piqa/acc_norm | 0.76 | 0.78 | 0.79 | 0.80 | 0.76 | 0.77 | 0.79 | | record/em | 0.88 | 0.91 | 0.92 | 0.89 | 0.88 | 0.89 | 0.91 | | record/f1 | 0.89 | 0.91 | 0.92 | 0.89 | 0.89 | 0.90 | 0.91 | | rte/acc | 0.54 | 0.56 | 0.69 | 0.57 | 0.58 | 0.60 | 0.64 | | truthfulqa_mc/mc1 | 0.20 | 0.21 | 0.25 | 0.23 | 0.22 | 0.23 | 0.25 | | truthfulqa_mc/mc2 | 0.36 | 0.34 | 0.40 | 0.35 | 0.35 | 0.35 | 0.38 | | wic/acc | 0.50 | 0.50 | 0.50 | 0.50 | 0.48 | 0.51 | 0.47 | | winogrande/acc | 0.64 | 0.68 | 0.70 | 0.66 | 0.62 | 0.67 | 0.70 | | Average | 0.52 | 0.55 | 0.57 | 0.56 | 0.53 | 0.55 | 0.57 | We removed the task CB and WSC from our benchmark, as our model performs suspiciously high on these two tasks. We hypothesize that there could be a benchmark data contamination in the training set. ## Contact We would love to get feedback from the community. If you have any questions, please open an issue or contact us. OpenLLaMA is developed by: [Xinyang Geng](https://young-geng.xyz/)* and [Hao Liu](https://www.haoliu.site/)* from Berkeley AI Research. *Equal Contribution ## Acknowledgment We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources. We’d like to specially thank Jonathan Caton from TPU Research Cloud for helping us organizing compute resources, Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimizing our training throughput. We’d also want to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback. The OpenLLaMA 13B v1 model is trained in collaboration with [Stability AI](https://stability.ai/), and we thank Stability AI for providing the computation resources. We’d like to especially thank David Ha and Shivanshu Purohit for the coordinating the logistics and providing engineering support. 
## Reference If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX: ``` @software{openlm2023openllama, author = {Geng, Xinyang and Liu, Hao}, title = {OpenLLaMA: An Open Reproduction of LLaMA}, month = May, year = 2023, url = {https://github.com/openlm-research/open_llama} } ``` ``` @software{together2023redpajama, author = {Together Computer}, title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset}, month = April, year = 2023, url = {https://github.com/togethercomputer/RedPajama-Data} } ``` ``` @article{touvron2023llama, title={Llama: Open and efficient foundation language models}, author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others}, journal={arXiv preprint arXiv:2302.13971}, year={2023} } ```
spacemanidol/flan-t5-base-5-5-xsum
spacemanidol
2023-07-07T21:25:32Z
108
0
transformers
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "dataset:xsum", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-02-27T15:39:27Z
--- tags: - generated_from_trainer datasets: - xsum metrics: - rouge model-index: - name: base-5-5 results: - task: name: Summarization type: summarization dataset: name: xsum type: xsum config: default split: validation args: default metrics: - name: Rouge1 type: rouge value: 38.7969 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # base-5-5 This model is a fine-tuned version of [x/base-5-5/](https://huggingface.co/x/base-5-5/) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 1.7414 - Rouge1: 38.7969 - Rouge2: 15.7213 - Rougel: 31.0769 - Rougelsum: 31.0667 - Gen Len: 26.9223 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.10.0 - Tokenizers 0.13.2
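## Example Usage (Sketch)

A minimal, assumed usage sketch for XSum-style single-document summarization; the article string is a placeholder and the generation lengths are arbitrary.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="spacemanidol/flan-t5-base-5-5-xsum")

article = "..."  # replace with a news article (XSum-style single-document input)
print(summarizer(article, max_length=64, min_length=8)[0]["summary_text"])
```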
mwz/UrduParaphraseBERT
mwz
2023-07-07T21:21:02Z
188
4
transformers
[ "transformers", "pytorch", "safetensors", "encoder-decoder", "text2text-generation", "paraphrase ", "ur", "dataset:mwz/ur_para", "license:mit", "autotrain_compatible", "region:us" ]
text2text-generation
2023-06-08T18:15:47Z
--- inference: false license: mit datasets: - mwz/ur_para language: - ur tags: - 'paraphrase ' --- # Urdu Paraphrasing Model This repository contains a trained Urdu paraphrasing model based on the BERT-based encoder-decoder architecture. The model has been fine-tuned on the Urdu Paraphrase Dataset and can generate paraphrases for given input sentences in Urdu. ## Model Description The model is built using the Hugging Face Transformers library and is trained on the BERT-base-uncased model. It employs an encoder-decoder architecture where the BERT model serves as the encoder, and another BERT model is used as the decoder. The model is trained to generate paraphrases by reconstructing the input sentences. ## Usage To use the trained model for paraphrasing Urdu sentences, you can follow the steps below: 1. Install the required dependencies by running the following command: 2. Load the trained model using the Hugging Face Transformers library: ```python from transformers import EncoderDecoderModel, BertTokenizer # Load the model and tokenizer model = EncoderDecoderModel.from_pretrained("mwz/UrduParaphraseBERT") tokenizer = BertTokenizer.from_pretrained("mwz/UrduParaphraseBERT") def paraphrase_urdu_sentence(sentence): input_ids = tokenizer.encode(sentence, padding="longest", truncation=True, max_length=512, return_tensors="pt") generated_ids = model.generate(input_ids=input_ids, max_length=128, num_beams=4, no_repeat_ngram_size=2) paraphrase = tokenizer.decode(generated_ids[0], skip_special_tokens=True) return paraphrase sentence = "ایک مثالی روشنی کا مشہور نقطہ آبادی چھوٹی چھوٹی سڑکوں میں اپنے آپ کو خوشگوار کرسکتی ہے۔" paraphrased_sentence = paraphrase_urdu_sentence(sentence) print(paraphrased_sentence) ```
Manab/donut-base-my_model_rapido_2_new_check_3
Manab
2023-07-07T21:17:49Z
46
0
transformers
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "base_model:Manab/donut-base-my_model_rapido_2_new_check_2", "base_model:finetune:Manab/donut-base-my_model_rapido_2_new_check_2", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-07-07T21:05:53Z
--- license: mit base_model: Manab/donut-base-my_model_rapido_2_new_check_2 tags: - generated_from_trainer datasets: - imagefolder model-index: - name: donut-base-my_model_rapido_2_new_check_3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-my_model_rapido_2_new_check_3 This model is a fine-tuned version of [Manab/donut-base-my_model_rapido_2_new_check_2](https://huggingface.co/Manab/donut-base-my_model_rapido_2_new_check_2) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3896 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.8694 | 0.69 | 50 | 2.0758 | | 2.2421 | 1.39 | 100 | 1.7321 | | 1.6972 | 2.08 | 150 | 1.4280 | | 1.5866 | 2.78 | 200 | 1.3896 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
guyson/Bluemoon_30b_safetensors_only
guyson
2023-07-07T21:01:18Z
8
0
transformers
[ "transformers", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-07-07T17:58:46Z
For my own use. All credit goes to the original model: https://huggingface.co/reeducator/bluemoonrp-30b/tree/main
dracero/ppo-LunarLander-v2
dracero
2023-07-07T20:56:44Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T20:54:42Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 242.61 +/- 16.03
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A completed version of the course's `package_to_hub` template, filled in with the values for this repository (a PPO agent on LunarLander-v2 pushed to `dracero/ppo-LunarLander-v2`). The quick training step stands in for the full course notebook, and pushing requires being logged in to the Hugging Face Hub.

```python
import gymnasium as gym

from stable_baselines3 import PPO
from stable_baselines3.common.monitor import Monitor
from stable_baselines3.common.vec_env import DummyVecEnv
from huggingface_sb3 import package_to_hub

# repo_id is the id of the model repository on the Hugging Face Hub
# ({organization}/{repo_name}), here the repository of this card.
repo_id = "dracero/ppo-LunarLander-v2"

# Name of the environment the agent plays.
env_id = "LunarLander-v2"

# Train the agent (in the course notebook this happens earlier, with tuned hyperparameters).
model = PPO("MlpPolicy", env_id, verbose=0)
model.learn(total_timesteps=1_000_000)
model_name = "ppo-LunarLander-v2"

# Create the evaluation env and set render_mode="rgb_array" so a replay video can be recorded.
eval_env = DummyVecEnv([lambda: Monitor(gym.make(env_id, render_mode="rgb_array"))])

# The model architecture we used.
model_architecture = "PPO"

# Commit message for the Hub push.
commit_message = "Upload PPO LunarLander-v2 trained agent"

# package_to_hub saves and evaluates the model, generates a model card and records a replay
# video of the agent before pushing the repo to the Hub (requires `huggingface-cli login`).
package_to_hub(model=model,                            # Our trained model
               model_name=model_name,                  # The name of our trained model
               model_architecture=model_architecture,  # The model architecture we used: in our case PPO
               env_id=env_id,                          # Name of the environment
               eval_env=eval_env,                      # Evaluation environment
               repo_id=repo_id,                        # Id of the model repository on the Hugging Face Hub
               commit_message=commit_message)
```
ALM-AHME/beit-large-patch16-224-finetuned-Lesion-Classification-HAM10000-AH-60-20-20
ALM-AHME
2023-07-07T20:39:27Z
5
1
transformers
[ "transformers", "pytorch", "tensorboard", "beit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-07T18:00:37Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: beit-large-patch16-224-finetuned-Lesion-Classification-HAM10000-AH-60-20-20 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: Augmented-Final split: train args: Augmented-Final metrics: - name: Accuracy type: accuracy value: 0.9907502569373073 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # beit-large-patch16-224-finetuned-Lesion-Classification-HAM10000-AH-60-20-20 This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0434 - Accuracy: 0.9908 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.9 - num_epochs: 12 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9688 | 1.0 | 122 | 1.8425 | 0.2775 | | 1.4822 | 2.0 | 244 | 1.3833 | 0.5457 | | 1.1239 | 3.0 | 366 | 0.9321 | 0.6680 | | 0.8686 | 4.0 | 488 | 0.6691 | 0.7698 | | 0.5234 | 5.0 | 610 | 0.4872 | 0.8335 | | 0.5246 | 6.0 | 732 | 0.3586 | 0.8736 | | 0.3691 | 7.0 | 854 | 0.3134 | 0.8993 | | 0.4708 | 8.0 | 976 | 0.2069 | 0.9394 | | 0.1694 | 9.0 | 1098 | 0.1832 | 0.9414 | | 0.2749 | 10.0 | 1220 | 0.1198 | 0.9640 | | 0.1777 | 11.0 | 1342 | 0.0845 | 0.9733 | | 0.1529 | 12.0 | 1464 | 0.0434 | 0.9908 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
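## Example Usage (Sketch)

A rough sketch only — the label mapping depends on the `imagefolder` split used for training, which the card does not document, and the image path is a placeholder for a dermatoscopic image similar to the HAM10000 data.

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="ALM-AHME/beit-large-patch16-224-finetuned-Lesion-Classification-HAM10000-AH-60-20-20",
)

# Placeholder path: a dermatoscopic image similar to the HAM10000 data.
print(classifier("lesion.jpg", top_k=3))
```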
Anis12/Cartpole-v1
Anis12
2023-07-07T20:28:01Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T20:27:52Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Cartpole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 453.00 +/- 141.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
imtiaz114/bert-finetuned-ner
imtiaz114
2023-07-07T20:14:46Z
62
0
transformers
[ "transformers", "tf", "bert", "token-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-07T06:36:59Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: imtiaz114/bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # imtiaz114/bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5417 - Validation Loss: 0.4322 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 231, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.5417 | 0.4322 | 0 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
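## Example Usage (Sketch)

The card does not document the label set or the training dataset, so the snippet below is only an assumed usage pattern; `framework="tf"` reflects that the repository was trained with Keras, and the input sentence is an arbitrary example.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="imtiaz114/bert-finetuned-ner",
    framework="tf",
    aggregation_strategy="simple",
)

print(ner("Hugging Face was founded in New York City."))
```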
alex2awesome/newsdiscourse-model-large
alex2awesome
2023-07-07T20:13:49Z
1
0
transformers
[ "transformers", "pytorch", "roberta", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
null
2023-07-07T19:37:21Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: newsdiscourse-model-large results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # newsdiscourse-model-large This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5899 - F1: 0.1975 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 0.14 | 100 | 1.9895 | 0.0487 | | No log | 0.28 | 200 | 2.0130 | 0.0512 | | No log | 0.43 | 300 | 1.9527 | 0.0512 | | No log | 0.57 | 400 | 1.9605 | 0.0487 | | 2.0539 | 0.71 | 500 | 1.9854 | 0.0618 | | 2.0539 | 0.85 | 600 | 1.7978 | 0.1242 | | 2.0539 | 1.0 | 700 | 1.7291 | 0.1373 | | 2.0539 | 1.14 | 800 | 1.9082 | 0.0487 | | 2.0539 | 1.28 | 900 | 1.9300 | 0.0487 | | 1.9096 | 1.42 | 1000 | 1.7186 | 0.1414 | | 1.9096 | 1.57 | 1100 | 1.7304 | 0.1399 | | 1.9096 | 1.71 | 1200 | 1.7281 | 0.1363 | | 1.9096 | 1.85 | 1300 | 1.8452 | 0.0576 | | 1.9096 | 1.99 | 1400 | 1.7180 | 0.1519 | | 1.7842 | 2.14 | 1500 | 1.7450 | 0.1525 | | 1.7842 | 2.28 | 1600 | 1.7752 | 0.1344 | | 1.7842 | 2.42 | 1700 | 1.7548 | 0.1506 | | 1.7842 | 2.56 | 1800 | 1.7185 | 0.1536 | | 1.7842 | 2.71 | 1900 | 1.6870 | 0.1536 | | 1.7227 | 2.85 | 2000 | 1.7336 | 0.1536 | | 1.7227 | 2.99 | 2100 | 1.7217 | 0.1490 | | 1.7227 | 3.13 | 2200 | 1.7213 | 0.1482 | | 1.7227 | 3.28 | 2300 | 1.7482 | 0.1435 | | 1.7227 | 3.42 | 2400 | 1.7559 | 0.1456 | | 1.7441 | 3.56 | 2500 | 1.7324 | 0.1406 | | 1.7441 | 3.7 | 2600 | 1.6977 | 0.1484 | | 1.7441 | 3.85 | 2700 | 1.6276 | 0.1839 | | 1.7441 | 3.99 | 2800 | 1.6109 | 0.1876 | | 1.7441 | 4.13 | 2900 | 1.6359 | 0.2181 | | 1.6515 | 4.27 | 3000 | 1.6463 | 0.1792 | | 1.6515 | 4.42 | 3100 | 1.6397 | 0.1828 | | 1.6515 | 4.56 | 3200 | 1.6189 | 0.1837 | | 1.6515 | 4.7 | 3300 | 1.6096 | 0.1875 | | 1.6515 | 4.84 | 3400 | 1.5904 | 0.1925 | | 1.6003 | 4.99 | 3500 | 1.5899 | 0.1975 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
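The card above does not state which task head newsdiscourse-model-large carries; the F1 metric and the name suggest discourse-role classification of news sentences, so the sketch below assumes a sequence-classification head and relies on whatever `id2label` mapping the author stored in the config. Both are assumptions, not facts from the card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "alex2awesome/newsdiscourse-model-large"  # assumed to expose a classification head

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

sentence = "The mayor announced the new budget on Tuesday."
inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Labels come from the checkpoint config; the card does not document them.
print(model.config.id2label[logits.argmax(-1).item()])
```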
tyavika/LR-1E4-Bert-QA-Pytorch-FULL
tyavika
2023-07-07T20:12:56Z
116
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-07T04:42:17Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: LR-1E4-Bert-QA-Pytorch-FULL results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # LR-1E4-Bert-QA-Pytorch-FULL This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4978 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.307 | 1.0 | 3290 | 1.2050 | | 0.947 | 2.0 | 6580 | 1.1626 | | 0.6696 | 3.0 | 9870 | 1.2209 | | 0.4711 | 4.0 | 13160 | 1.4978 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
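A minimal extractive question-answering sketch for LR-1E4-Bert-QA-Pytorch-FULL; the question/context pair is illustrative only.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="tyavika/LR-1E4-Bert-QA-Pytorch-FULL")

result = qa(
    question="What learning rate was used for fine-tuning?",
    context="The model was fine-tuned from bert-base-uncased with a learning rate of 1e-4 for four epochs.",
)
print(result["answer"], round(result["score"], 3))
```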
Khushnur/t5-base-end2end-questions-generation_eli_squad_single
Khushnur
2023-07-07T20:10:55Z
161
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-07T18:37:34Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: t5-base-end2end-questions-generation_eli_squad_single results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-end2end-questions-generation_eli_squad_single This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.7312 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.4834 | 0.26 | 100 | 2.7458 | | 2.3191 | 0.51 | 200 | 2.7305 | | 2.2773 | 0.77 | 300 | 2.7290 | | 2.2385 | 1.02 | 400 | 2.7239 | | 2.1703 | 1.28 | 500 | 2.7328 | | 2.1666 | 1.54 | 600 | 2.7324 | | 2.1551 | 1.79 | 700 | 2.7325 | | 2.1437 | 2.05 | 800 | 2.7323 | | 2.0951 | 2.3 | 900 | 2.7331 | | 2.107 | 2.56 | 1000 | 2.7336 | | 2.1107 | 2.82 | 1100 | 2.7312 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
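A hedged generation sketch for the question-generation model above. End-to-end QG checkpoints usually take a plain passage, often with a `generate questions:`-style prefix, and emit one or more questions; the card does not document the exact input format, so the prefix below is an assumption.

```python
from transformers import pipeline

qg = pipeline(
    "text2text-generation",
    model="Khushnur/t5-base-end2end-questions-generation_eli_squad_single",
)

passage = (
    "The Amazon rainforest covers much of the Amazon basin in South America "
    "and is home to an enormous diversity of plant and animal species."
)

# The "generate questions:" prefix is a common convention, not confirmed by this card.
outputs = qg("generate questions: " + passage, max_length=128, num_beams=4)
print(outputs[0]["generated_text"])
```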
pineiden/nominal-groups-recognition-medical-disease-competencia2-bert-medical-ner
pineiden
2023-07-07T20:10:20Z
133
3
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "es", "license:openrail", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-07T14:20:19Z
--- language: - es license: openrail tags: - generated_from_trainer model-index: - name: nominal-groups-recognition-medical-disease-competencia2-bert-medical-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nominal-groups-recognition-medical-disease-competencia2-bert-medical-ner This model is a fine-tuned version of [ukkendane/bert-medical-ner](https://huggingface.co/ukkendane/bert-medical-ner) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3607 - Body Part Precision: 0.6555 - Body Part Recall: 0.7094 - Body Part F1: 0.6814 - Body Part Number: 413 - Disease Precision: 0.6835 - Disease Recall: 0.7067 - Disease F1: 0.6949 - Disease Number: 975 - Family Member Precision: 1.0 - Family Member Recall: 0.6 - Family Member F1: 0.7500 - Family Member Number: 30 - Medication Precision: 0.7647 - Medication Recall: 0.6989 - Medication F1: 0.7303 - Medication Number: 93 - Procedure Precision: 0.5385 - Procedure Recall: 0.5402 - Procedure F1: 0.5393 - Procedure Number: 311 - Overall Precision: 0.6594 - Overall Recall: 0.6767 - Overall F1: 0.6679 - Overall Accuracy: 0.9079 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 13 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Body Part Precision | Body Part Recall | Body Part F1 | Body Part Number | Disease Precision | Disease Recall | Disease F1 | Disease Number | Family Member Precision | Family Member Recall | Family Member F1 | Family Member Number | Medication Precision | Medication Recall | Medication F1 | Medication Number | Procedure Precision | Procedure Recall | Procedure F1 | Procedure Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------------------:|:--------------------:|:----------------:|:--------------------:|:--------------------:|:-----------------:|:-------------:|:-----------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:----------------:| | 0.4541 | 1.0 | 8025 | 0.3607 | 0.6555 | 0.7094 | 0.6814 | 413 | 0.6835 | 0.7067 | 0.6949 | 975 | 1.0 | 0.6 | 0.7500 | 30 | 0.7647 | 0.6989 | 0.7303 | 93 | 0.5385 | 0.5402 | 0.5393 | 311 | 0.6594 | 0.6767 | 0.6679 | 0.9079 | | 0.3149 | 2.0 | 16050 | 0.3607 | 0.6555 | 0.7094 | 0.6814 | 413 | 0.6835 | 0.7067 | 0.6949 | 975 | 1.0 | 0.6 | 0.7500 | 30 | 0.7647 | 0.6989 | 0.7303 | 93 | 0.5385 | 0.5402 | 0.5393 | 311 | 0.6594 | 0.6767 | 0.6679 | 0.9079 | | 0.3161 | 3.0 | 24075 | 0.3607 | 0.6555 | 0.7094 | 0.6814 | 413 | 0.6835 | 0.7067 | 0.6949 | 975 | 1.0 | 0.6 | 0.7500 | 30 | 0.7647 | 0.6989 | 0.7303 | 93 | 0.5385 | 0.5402 | 0.5393 | 311 | 0.6594 | 0.6767 | 0.6679 | 0.9079 | | 0.3181 | 4.0 | 32100 | 0.3607 | 0.6555 | 0.7094 | 0.6814 | 413 | 0.6835 | 0.7067 | 0.6949 | 975 | 1.0 | 0.6 | 0.7500 
| 30 | 0.7647 | 0.6989 | 0.7303 | 93 | 0.5385 | 0.5402 | 0.5393 | 311 | 0.6594 | 0.6767 | 0.6679 | 0.9079 | | 0.3164 | 5.0 | 40125 | 0.3607 | 0.6555 | 0.7094 | 0.6814 | 413 | 0.6835 | 0.7067 | 0.6949 | 975 | 1.0 | 0.6 | 0.7500 | 30 | 0.7647 | 0.6989 | 0.7303 | 93 | 0.5385 | 0.5402 | 0.5393 | 311 | 0.6594 | 0.6767 | 0.6679 | 0.9079 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
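A minimal sketch for tagging nominal groups in Spanish clinical text with the checkpoint above; the entity types (body part, disease, medication, procedure, family member) come from the metrics table, and the example sentence is illustrative only.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="pineiden/nominal-groups-recognition-medical-disease-competencia2-bert-medical-ner",
    aggregation_strategy="simple",  # group sub-tokens into complete entity spans
)

texto = "El paciente presenta dolor abdominal y fue tratado con paracetamol tras la cirugía."
for entidad in ner(texto):
    print(entidad["entity_group"], entidad["word"], round(entidad["score"], 3))
```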
Jade1211/textual_inversion_baby
Jade1211
2023-07-07T20:10:18Z
5
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-07T18:08:50Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion inference: true --- # Textual inversion text2image fine-tuning - Jade1211/textual_inversion_baby These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. Example images are available in the original model repository.
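A hedged sketch for using the learned concept with diffusers: it assumes the repository contains a standard `learned_embeds.bin` textual-inversion file and that the placeholder token used in prompts is `<baby>` (the actual token name was fixed at training time and is not stated in the card).

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the learned concept embedding from this repo into the tokenizer and text encoder.
pipe.load_textual_inversion("Jade1211/textual_inversion_baby")

# "<baby>" is an assumed placeholder token; replace it with the token used during training.
image = pipe("a portrait photo of <baby> in a sunflower field", num_inference_steps=30).images[0]
image.save("baby_concept.png")
```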
GalSarid/setfit-movie-genre-sentence-t5-xl
GalSarid
2023-07-07T20:04:50Z
4
1
sentence-transformers
[ "sentence-transformers", "pytorch", "t5", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-07-04T21:34:54Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # GalSarid/setfit-movie-genre-sentence-t5-xl This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("GalSarid/setfit-movie-genre-sentence-t5-xl") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
anttip/ct2fast-e5-small-v2-hfie
anttip
2023-07-07T20:04:37Z
8
2
transformers
[ "transformers", "bert", "feature-extraction", "ctranslate2", "int8", "float16", "mteb", "en", "arxiv:2212.03533", "arxiv:2104.08663", "arxiv:2210.07316", "license:mit", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2023-07-07T19:30:13Z
--- tags: - ctranslate2 - int8 - float16 - mteb model-index: - name: e5-small-v2 results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 77.59701492537313 - type: ap value: 41.67064885731708 - type: f1 value: 71.86465946398573 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 91.265875 - type: ap value: 87.67633085349644 - type: f1 value: 91.24297521425744 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 45.882000000000005 - type: f1 value: 45.08058870381236 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 20.697 - type: map_at_10 value: 33.975 - type: map_at_100 value: 35.223 - type: map_at_1000 value: 35.260000000000005 - type: map_at_3 value: 29.776999999999997 - type: map_at_5 value: 32.035000000000004 - type: mrr_at_1 value: 20.982 - type: mrr_at_10 value: 34.094 - type: mrr_at_100 value: 35.343 - type: mrr_at_1000 value: 35.38 - type: mrr_at_3 value: 29.884 - type: mrr_at_5 value: 32.141999999999996 - type: ndcg_at_1 value: 20.697 - type: ndcg_at_10 value: 41.668 - type: ndcg_at_100 value: 47.397 - type: ndcg_at_1000 value: 48.305 - type: ndcg_at_3 value: 32.928000000000004 - type: ndcg_at_5 value: 36.998999999999995 - type: precision_at_1 value: 20.697 - type: precision_at_10 value: 6.636 - type: precision_at_100 value: 0.924 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 14.035 - type: precision_at_5 value: 10.398 - type: recall_at_1 value: 20.697 - type: recall_at_10 value: 66.35799999999999 - type: recall_at_100 value: 92.39 - type: recall_at_1000 value: 99.36 - type: recall_at_3 value: 42.105 - type: recall_at_5 value: 51.991 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 42.1169517447068 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 34.79553720107097 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 58.10811337308168 - type: mrr value: 71.56410763751482 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 78.46834918248696 - type: cos_sim_spearman value: 79.4289182755206 - type: euclidean_pearson value: 76.26662973727008 - type: euclidean_spearman value: 78.11744260952536 - type: manhattan_pearson value: 76.08175262609434 - type: manhattan_spearman value: 78.29395265552289 - task: type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 
0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 81.63636363636364 - type: f1 value: 81.55779952376953 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 35.88541137137571 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 30.05205685274407 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 30.293999999999997 - type: map_at_10 value: 39.876 - type: map_at_100 value: 41.315000000000005 - type: map_at_1000 value: 41.451 - type: map_at_3 value: 37.194 - type: map_at_5 value: 38.728 - type: mrr_at_1 value: 37.053000000000004 - type: mrr_at_10 value: 45.281 - type: mrr_at_100 value: 46.188 - type: mrr_at_1000 value: 46.245999999999995 - type: mrr_at_3 value: 43.228 - type: mrr_at_5 value: 44.366 - type: ndcg_at_1 value: 37.053000000000004 - type: ndcg_at_10 value: 45.086 - type: ndcg_at_100 value: 50.756 - type: ndcg_at_1000 value: 53.123 - type: ndcg_at_3 value: 41.416 - type: ndcg_at_5 value: 43.098 - type: precision_at_1 value: 37.053000000000004 - type: precision_at_10 value: 8.34 - type: precision_at_100 value: 1.346 - type: precision_at_1000 value: 0.186 - type: precision_at_3 value: 19.647000000000002 - type: precision_at_5 value: 13.877 - type: recall_at_1 value: 30.293999999999997 - type: recall_at_10 value: 54.309 - type: recall_at_100 value: 78.59 - type: recall_at_1000 value: 93.82300000000001 - type: recall_at_3 value: 43.168 - type: recall_at_5 value: 48.192 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 28.738000000000003 - type: map_at_10 value: 36.925999999999995 - type: map_at_100 value: 38.017 - type: map_at_1000 value: 38.144 - type: map_at_3 value: 34.446 - type: map_at_5 value: 35.704 - type: mrr_at_1 value: 35.478 - type: mrr_at_10 value: 42.786 - type: mrr_at_100 value: 43.458999999999996 - type: mrr_at_1000 value: 43.507 - type: mrr_at_3 value: 40.648 - type: mrr_at_5 value: 41.804 - type: ndcg_at_1 value: 35.478 - type: ndcg_at_10 value: 42.044 - type: ndcg_at_100 value: 46.249 - type: ndcg_at_1000 value: 48.44 - type: ndcg_at_3 value: 38.314 - type: ndcg_at_5 value: 39.798 - type: precision_at_1 value: 35.478 - type: precision_at_10 value: 7.764 - type: precision_at_100 value: 1.253 - type: precision_at_1000 value: 0.174 - type: precision_at_3 value: 18.047 - type: precision_at_5 value: 12.637 - type: recall_at_1 value: 28.738000000000003 - type: recall_at_10 value: 50.659 - type: recall_at_100 value: 68.76299999999999 - type: recall_at_1000 value: 82.811 - type: recall_at_3 value: 39.536 - type: recall_at_5 value: 43.763999999999996 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 38.565 - type: map_at_10 value: 50.168 - type: map_at_100 value: 51.11 - type: map_at_1000 value: 51.173 - type: map_at_3 value: 47.044000000000004 - type: map_at_5 value: 48.838 - type: mrr_at_1 value: 44.201 - type: mrr_at_10 value: 53.596999999999994 - type: mrr_at_100 value: 54.211 - 
type: mrr_at_1000 value: 54.247 - type: mrr_at_3 value: 51.202000000000005 - type: mrr_at_5 value: 52.608999999999995 - type: ndcg_at_1 value: 44.201 - type: ndcg_at_10 value: 55.694 - type: ndcg_at_100 value: 59.518 - type: ndcg_at_1000 value: 60.907 - type: ndcg_at_3 value: 50.395999999999994 - type: ndcg_at_5 value: 53.022999999999996 - type: precision_at_1 value: 44.201 - type: precision_at_10 value: 8.84 - type: precision_at_100 value: 1.162 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 22.153 - type: precision_at_5 value: 15.260000000000002 - type: recall_at_1 value: 38.565 - type: recall_at_10 value: 68.65 - type: recall_at_100 value: 85.37400000000001 - type: recall_at_1000 value: 95.37400000000001 - type: recall_at_3 value: 54.645999999999994 - type: recall_at_5 value: 60.958 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.945 - type: map_at_10 value: 30.641000000000002 - type: map_at_100 value: 31.599 - type: map_at_1000 value: 31.691000000000003 - type: map_at_3 value: 28.405 - type: map_at_5 value: 29.704000000000004 - type: mrr_at_1 value: 25.537 - type: mrr_at_10 value: 32.22 - type: mrr_at_100 value: 33.138 - type: mrr_at_1000 value: 33.214 - type: mrr_at_3 value: 30.151 - type: mrr_at_5 value: 31.298 - type: ndcg_at_1 value: 25.537 - type: ndcg_at_10 value: 34.638000000000005 - type: ndcg_at_100 value: 39.486 - type: ndcg_at_1000 value: 41.936 - type: ndcg_at_3 value: 30.333 - type: ndcg_at_5 value: 32.482 - type: precision_at_1 value: 25.537 - type: precision_at_10 value: 5.153 - type: precision_at_100 value: 0.7929999999999999 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 12.429 - type: precision_at_5 value: 8.723 - type: recall_at_1 value: 23.945 - type: recall_at_10 value: 45.412 - type: recall_at_100 value: 67.836 - type: recall_at_1000 value: 86.467 - type: recall_at_3 value: 34.031 - type: recall_at_5 value: 39.039 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 14.419 - type: map_at_10 value: 20.858999999999998 - type: map_at_100 value: 22.067999999999998 - type: map_at_1000 value: 22.192 - type: map_at_3 value: 18.673000000000002 - type: map_at_5 value: 19.968 - type: mrr_at_1 value: 17.785999999999998 - type: mrr_at_10 value: 24.878 - type: mrr_at_100 value: 26.021 - type: mrr_at_1000 value: 26.095000000000002 - type: mrr_at_3 value: 22.616 - type: mrr_at_5 value: 23.785 - type: ndcg_at_1 value: 17.785999999999998 - type: ndcg_at_10 value: 25.153 - type: ndcg_at_100 value: 31.05 - type: ndcg_at_1000 value: 34.052 - type: ndcg_at_3 value: 21.117 - type: ndcg_at_5 value: 23.048 - type: precision_at_1 value: 17.785999999999998 - type: precision_at_10 value: 4.590000000000001 - type: precision_at_100 value: 0.864 - type: precision_at_1000 value: 0.125 - type: precision_at_3 value: 9.908999999999999 - type: precision_at_5 value: 7.313 - type: recall_at_1 value: 14.419 - type: recall_at_10 value: 34.477999999999994 - type: recall_at_100 value: 60.02499999999999 - type: recall_at_1000 value: 81.646 - type: recall_at_3 value: 23.515 - type: recall_at_5 value: 28.266999999999996 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.268 - type: map_at_10 value: 
35.114000000000004 - type: map_at_100 value: 36.212 - type: map_at_1000 value: 36.333 - type: map_at_3 value: 32.436 - type: map_at_5 value: 33.992 - type: mrr_at_1 value: 31.761 - type: mrr_at_10 value: 40.355999999999995 - type: mrr_at_100 value: 41.125 - type: mrr_at_1000 value: 41.186 - type: mrr_at_3 value: 37.937 - type: mrr_at_5 value: 39.463 - type: ndcg_at_1 value: 31.761 - type: ndcg_at_10 value: 40.422000000000004 - type: ndcg_at_100 value: 45.458999999999996 - type: ndcg_at_1000 value: 47.951 - type: ndcg_at_3 value: 35.972 - type: ndcg_at_5 value: 38.272 - type: precision_at_1 value: 31.761 - type: precision_at_10 value: 7.103 - type: precision_at_100 value: 1.133 - type: precision_at_1000 value: 0.152 - type: precision_at_3 value: 16.779 - type: precision_at_5 value: 11.877 - type: recall_at_1 value: 26.268 - type: recall_at_10 value: 51.053000000000004 - type: recall_at_100 value: 72.702 - type: recall_at_1000 value: 89.521 - type: recall_at_3 value: 38.619 - type: recall_at_5 value: 44.671 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 25.230999999999998 - type: map_at_10 value: 34.227000000000004 - type: map_at_100 value: 35.370000000000005 - type: map_at_1000 value: 35.488 - type: map_at_3 value: 31.496000000000002 - type: map_at_5 value: 33.034 - type: mrr_at_1 value: 30.822 - type: mrr_at_10 value: 39.045 - type: mrr_at_100 value: 39.809 - type: mrr_at_1000 value: 39.873 - type: mrr_at_3 value: 36.663000000000004 - type: mrr_at_5 value: 37.964 - type: ndcg_at_1 value: 30.822 - type: ndcg_at_10 value: 39.472 - type: ndcg_at_100 value: 44.574999999999996 - type: ndcg_at_1000 value: 47.162 - type: ndcg_at_3 value: 34.929 - type: ndcg_at_5 value: 37.002 - type: precision_at_1 value: 30.822 - type: precision_at_10 value: 7.055 - type: precision_at_100 value: 1.124 - type: precision_at_1000 value: 0.152 - type: precision_at_3 value: 16.591 - type: precision_at_5 value: 11.667 - type: recall_at_1 value: 25.230999999999998 - type: recall_at_10 value: 50.42100000000001 - type: recall_at_100 value: 72.685 - type: recall_at_1000 value: 90.469 - type: recall_at_3 value: 37.503 - type: recall_at_5 value: 43.123 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.604166666666664 - type: map_at_10 value: 32.427166666666665 - type: map_at_100 value: 33.51474999999999 - type: map_at_1000 value: 33.6345 - type: map_at_3 value: 30.02366666666667 - type: map_at_5 value: 31.382333333333328 - type: mrr_at_1 value: 29.001166666666666 - type: mrr_at_10 value: 36.3315 - type: mrr_at_100 value: 37.16683333333333 - type: mrr_at_1000 value: 37.23341666666668 - type: mrr_at_3 value: 34.19916666666667 - type: mrr_at_5 value: 35.40458333333334 - type: ndcg_at_1 value: 29.001166666666666 - type: ndcg_at_10 value: 37.06883333333334 - type: ndcg_at_100 value: 41.95816666666666 - type: ndcg_at_1000 value: 44.501583333333336 - type: ndcg_at_3 value: 32.973499999999994 - type: ndcg_at_5 value: 34.90833333333334 - type: precision_at_1 value: 29.001166666666666 - type: precision_at_10 value: 6.336 - type: precision_at_100 value: 1.0282499999999999 - type: precision_at_1000 value: 0.14391666666666664 - type: precision_at_3 value: 14.932499999999996 - type: precision_at_5 value: 10.50825 - type: recall_at_1 value: 24.604166666666664 - type: recall_at_10 value: 46.9525 - 
type: recall_at_100 value: 68.67816666666667 - type: recall_at_1000 value: 86.59783333333334 - type: recall_at_3 value: 35.49783333333333 - type: recall_at_5 value: 40.52525000000001 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.559 - type: map_at_10 value: 29.023 - type: map_at_100 value: 29.818 - type: map_at_1000 value: 29.909000000000002 - type: map_at_3 value: 27.037 - type: map_at_5 value: 28.225 - type: mrr_at_1 value: 26.994 - type: mrr_at_10 value: 31.962000000000003 - type: mrr_at_100 value: 32.726 - type: mrr_at_1000 value: 32.800000000000004 - type: mrr_at_3 value: 30.266 - type: mrr_at_5 value: 31.208999999999996 - type: ndcg_at_1 value: 26.994 - type: ndcg_at_10 value: 32.53 - type: ndcg_at_100 value: 36.758 - type: ndcg_at_1000 value: 39.362 - type: ndcg_at_3 value: 28.985 - type: ndcg_at_5 value: 30.757 - type: precision_at_1 value: 26.994 - type: precision_at_10 value: 4.968999999999999 - type: precision_at_100 value: 0.759 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 12.219 - type: precision_at_5 value: 8.527999999999999 - type: recall_at_1 value: 23.559 - type: recall_at_10 value: 40.585 - type: recall_at_100 value: 60.306000000000004 - type: recall_at_1000 value: 80.11 - type: recall_at_3 value: 30.794 - type: recall_at_5 value: 35.186 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 16.384999999999998 - type: map_at_10 value: 22.142 - type: map_at_100 value: 23.057 - type: map_at_1000 value: 23.177 - type: map_at_3 value: 20.29 - type: map_at_5 value: 21.332 - type: mrr_at_1 value: 19.89 - type: mrr_at_10 value: 25.771 - type: mrr_at_100 value: 26.599 - type: mrr_at_1000 value: 26.680999999999997 - type: mrr_at_3 value: 23.962 - type: mrr_at_5 value: 24.934 - type: ndcg_at_1 value: 19.89 - type: ndcg_at_10 value: 25.97 - type: ndcg_at_100 value: 30.605 - type: ndcg_at_1000 value: 33.619 - type: ndcg_at_3 value: 22.704 - type: ndcg_at_5 value: 24.199 - type: precision_at_1 value: 19.89 - type: precision_at_10 value: 4.553 - type: precision_at_100 value: 0.8049999999999999 - type: precision_at_1000 value: 0.122 - type: precision_at_3 value: 10.541 - type: precision_at_5 value: 7.46 - type: recall_at_1 value: 16.384999999999998 - type: recall_at_10 value: 34.001 - type: recall_at_100 value: 55.17100000000001 - type: recall_at_1000 value: 77.125 - type: recall_at_3 value: 24.618000000000002 - type: recall_at_5 value: 28.695999999999998 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.726 - type: map_at_10 value: 31.227 - type: map_at_100 value: 32.311 - type: map_at_1000 value: 32.419 - type: map_at_3 value: 28.765 - type: map_at_5 value: 30.229 - type: mrr_at_1 value: 27.705000000000002 - type: mrr_at_10 value: 35.085 - type: mrr_at_100 value: 35.931000000000004 - type: mrr_at_1000 value: 36 - type: mrr_at_3 value: 32.603 - type: mrr_at_5 value: 34.117999999999995 - type: ndcg_at_1 value: 27.705000000000002 - type: ndcg_at_10 value: 35.968 - type: ndcg_at_100 value: 41.197 - type: ndcg_at_1000 value: 43.76 - type: ndcg_at_3 value: 31.304 - type: ndcg_at_5 value: 33.661 - type: precision_at_1 value: 27.705000000000002 - type: precision_at_10 value: 5.942 - type: precision_at_100 value: 
0.964 - type: precision_at_1000 value: 0.13 - type: precision_at_3 value: 13.868 - type: precision_at_5 value: 9.944 - type: recall_at_1 value: 23.726 - type: recall_at_10 value: 46.786 - type: recall_at_100 value: 70.072 - type: recall_at_1000 value: 88.2 - type: recall_at_3 value: 33.981 - type: recall_at_5 value: 39.893 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.344 - type: map_at_10 value: 31.636999999999997 - type: map_at_100 value: 33.065 - type: map_at_1000 value: 33.300000000000004 - type: map_at_3 value: 29.351 - type: map_at_5 value: 30.432 - type: mrr_at_1 value: 27.866000000000003 - type: mrr_at_10 value: 35.587 - type: mrr_at_100 value: 36.52 - type: mrr_at_1000 value: 36.597 - type: mrr_at_3 value: 33.696 - type: mrr_at_5 value: 34.713 - type: ndcg_at_1 value: 27.866000000000003 - type: ndcg_at_10 value: 36.61 - type: ndcg_at_100 value: 41.88 - type: ndcg_at_1000 value: 45.105000000000004 - type: ndcg_at_3 value: 33.038000000000004 - type: ndcg_at_5 value: 34.331 - type: precision_at_1 value: 27.866000000000003 - type: precision_at_10 value: 6.917 - type: precision_at_100 value: 1.3599999999999999 - type: precision_at_1000 value: 0.233 - type: precision_at_3 value: 15.547 - type: precision_at_5 value: 10.791 - type: recall_at_1 value: 23.344 - type: recall_at_10 value: 45.782000000000004 - type: recall_at_100 value: 69.503 - type: recall_at_1000 value: 90.742 - type: recall_at_3 value: 35.160000000000004 - type: recall_at_5 value: 39.058 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 20.776 - type: map_at_10 value: 27.285999999999998 - type: map_at_100 value: 28.235 - type: map_at_1000 value: 28.337 - type: map_at_3 value: 25.147000000000002 - type: map_at_5 value: 26.401999999999997 - type: mrr_at_1 value: 22.921 - type: mrr_at_10 value: 29.409999999999997 - type: mrr_at_100 value: 30.275000000000002 - type: mrr_at_1000 value: 30.354999999999997 - type: mrr_at_3 value: 27.418 - type: mrr_at_5 value: 28.592000000000002 - type: ndcg_at_1 value: 22.921 - type: ndcg_at_10 value: 31.239 - type: ndcg_at_100 value: 35.965 - type: ndcg_at_1000 value: 38.602 - type: ndcg_at_3 value: 27.174 - type: ndcg_at_5 value: 29.229 - type: precision_at_1 value: 22.921 - type: precision_at_10 value: 4.806 - type: precision_at_100 value: 0.776 - type: precision_at_1000 value: 0.11 - type: precision_at_3 value: 11.459999999999999 - type: precision_at_5 value: 8.022 - type: recall_at_1 value: 20.776 - type: recall_at_10 value: 41.294 - type: recall_at_100 value: 63.111 - type: recall_at_1000 value: 82.88600000000001 - type: recall_at_3 value: 30.403000000000002 - type: recall_at_5 value: 35.455999999999996 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 9.376 - type: map_at_10 value: 15.926000000000002 - type: map_at_100 value: 17.585 - type: map_at_1000 value: 17.776 - type: map_at_3 value: 13.014000000000001 - type: map_at_5 value: 14.417 - type: mrr_at_1 value: 20.195 - type: mrr_at_10 value: 29.95 - type: mrr_at_100 value: 31.052000000000003 - type: mrr_at_1000 value: 31.108000000000004 - type: mrr_at_3 value: 26.667 - type: mrr_at_5 value: 28.458 - type: ndcg_at_1 value: 20.195 - type: ndcg_at_10 value: 22.871 - type: ndcg_at_100 
value: 29.921999999999997 - type: ndcg_at_1000 value: 33.672999999999995 - type: ndcg_at_3 value: 17.782999999999998 - type: ndcg_at_5 value: 19.544 - type: precision_at_1 value: 20.195 - type: precision_at_10 value: 7.394 - type: precision_at_100 value: 1.493 - type: precision_at_1000 value: 0.218 - type: precision_at_3 value: 13.073 - type: precision_at_5 value: 10.436 - type: recall_at_1 value: 9.376 - type: recall_at_10 value: 28.544999999999998 - type: recall_at_100 value: 53.147999999999996 - type: recall_at_1000 value: 74.62 - type: recall_at_3 value: 16.464000000000002 - type: recall_at_5 value: 21.004 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 8.415000000000001 - type: map_at_10 value: 18.738 - type: map_at_100 value: 27.291999999999998 - type: map_at_1000 value: 28.992 - type: map_at_3 value: 13.196 - type: map_at_5 value: 15.539 - type: mrr_at_1 value: 66.5 - type: mrr_at_10 value: 74.518 - type: mrr_at_100 value: 74.86 - type: mrr_at_1000 value: 74.87 - type: mrr_at_3 value: 72.375 - type: mrr_at_5 value: 73.86200000000001 - type: ndcg_at_1 value: 54.37499999999999 - type: ndcg_at_10 value: 41.317 - type: ndcg_at_100 value: 45.845 - type: ndcg_at_1000 value: 52.92 - type: ndcg_at_3 value: 44.983000000000004 - type: ndcg_at_5 value: 42.989 - type: precision_at_1 value: 66.5 - type: precision_at_10 value: 33.6 - type: precision_at_100 value: 10.972999999999999 - type: precision_at_1000 value: 2.214 - type: precision_at_3 value: 48.583 - type: precision_at_5 value: 42.15 - type: recall_at_1 value: 8.415000000000001 - type: recall_at_10 value: 24.953 - type: recall_at_100 value: 52.48199999999999 - type: recall_at_1000 value: 75.093 - type: recall_at_3 value: 14.341000000000001 - type: recall_at_5 value: 18.468 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 47.06499999999999 - type: f1 value: 41.439327599975385 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 66.02 - type: map_at_10 value: 76.68599999999999 - type: map_at_100 value: 76.959 - type: map_at_1000 value: 76.972 - type: map_at_3 value: 75.024 - type: map_at_5 value: 76.153 - type: mrr_at_1 value: 71.197 - type: mrr_at_10 value: 81.105 - type: mrr_at_100 value: 81.232 - type: mrr_at_1000 value: 81.233 - type: mrr_at_3 value: 79.758 - type: mrr_at_5 value: 80.69 - type: ndcg_at_1 value: 71.197 - type: ndcg_at_10 value: 81.644 - type: ndcg_at_100 value: 82.645 - type: ndcg_at_1000 value: 82.879 - type: ndcg_at_3 value: 78.792 - type: ndcg_at_5 value: 80.528 - type: precision_at_1 value: 71.197 - type: precision_at_10 value: 10.206999999999999 - type: precision_at_100 value: 1.093 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 30.868000000000002 - type: precision_at_5 value: 19.559 - type: recall_at_1 value: 66.02 - type: recall_at_10 value: 92.50699999999999 - type: recall_at_100 value: 96.497 - type: recall_at_1000 value: 97.956 - type: recall_at_3 value: 84.866 - type: recall_at_5 value: 89.16199999999999 - task: type: Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 17.948 - type: map_at_10 value: 29.833 - type: map_at_100 value: 31.487 - type: map_at_1000 value: 
31.674000000000003 - type: map_at_3 value: 26.029999999999998 - type: map_at_5 value: 28.038999999999998 - type: mrr_at_1 value: 34.721999999999994 - type: mrr_at_10 value: 44.214999999999996 - type: mrr_at_100 value: 44.994 - type: mrr_at_1000 value: 45.051 - type: mrr_at_3 value: 41.667 - type: mrr_at_5 value: 43.032 - type: ndcg_at_1 value: 34.721999999999994 - type: ndcg_at_10 value: 37.434 - type: ndcg_at_100 value: 43.702000000000005 - type: ndcg_at_1000 value: 46.993 - type: ndcg_at_3 value: 33.56 - type: ndcg_at_5 value: 34.687 - type: precision_at_1 value: 34.721999999999994 - type: precision_at_10 value: 10.401 - type: precision_at_100 value: 1.7049999999999998 - type: precision_at_1000 value: 0.22799999999999998 - type: precision_at_3 value: 22.531000000000002 - type: precision_at_5 value: 16.42 - type: recall_at_1 value: 17.948 - type: recall_at_10 value: 45.062999999999995 - type: recall_at_100 value: 68.191 - type: recall_at_1000 value: 87.954 - type: recall_at_3 value: 31.112000000000002 - type: recall_at_5 value: 36.823 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 36.644 - type: map_at_10 value: 57.658 - type: map_at_100 value: 58.562000000000005 - type: map_at_1000 value: 58.62500000000001 - type: map_at_3 value: 54.022999999999996 - type: map_at_5 value: 56.293000000000006 - type: mrr_at_1 value: 73.288 - type: mrr_at_10 value: 80.51700000000001 - type: mrr_at_100 value: 80.72 - type: mrr_at_1000 value: 80.728 - type: mrr_at_3 value: 79.33200000000001 - type: mrr_at_5 value: 80.085 - type: ndcg_at_1 value: 73.288 - type: ndcg_at_10 value: 66.61 - type: ndcg_at_100 value: 69.723 - type: ndcg_at_1000 value: 70.96000000000001 - type: ndcg_at_3 value: 61.358999999999995 - type: ndcg_at_5 value: 64.277 - type: precision_at_1 value: 73.288 - type: precision_at_10 value: 14.17 - type: precision_at_100 value: 1.659 - type: precision_at_1000 value: 0.182 - type: precision_at_3 value: 39.487 - type: precision_at_5 value: 25.999 - type: recall_at_1 value: 36.644 - type: recall_at_10 value: 70.851 - type: recall_at_100 value: 82.94399999999999 - type: recall_at_1000 value: 91.134 - type: recall_at_3 value: 59.230000000000004 - type: recall_at_5 value: 64.997 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 86.00280000000001 - type: ap value: 80.46302061021223 - type: f1 value: 85.9592921596419 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 22.541 - type: map_at_10 value: 34.625 - type: map_at_100 value: 35.785 - type: map_at_1000 value: 35.831 - type: map_at_3 value: 30.823 - type: map_at_5 value: 32.967999999999996 - type: mrr_at_1 value: 23.180999999999997 - type: mrr_at_10 value: 35.207 - type: mrr_at_100 value: 36.315 - type: mrr_at_1000 value: 36.355 - type: mrr_at_3 value: 31.483 - type: mrr_at_5 value: 33.589999999999996 - type: ndcg_at_1 value: 23.195 - type: ndcg_at_10 value: 41.461 - type: ndcg_at_100 value: 47.032000000000004 - type: ndcg_at_1000 value: 48.199999999999996 - type: ndcg_at_3 value: 33.702 - type: ndcg_at_5 value: 37.522 - type: precision_at_1 value: 23.195 - type: precision_at_10 value: 6.526999999999999 - type: precision_at_100 value: 0.932 - type: precision_at_1000 value: 0.10300000000000001 - type: precision_at_3 value: 
14.308000000000002 - type: precision_at_5 value: 10.507 - type: recall_at_1 value: 22.541 - type: recall_at_10 value: 62.524 - type: recall_at_100 value: 88.228 - type: recall_at_1000 value: 97.243 - type: recall_at_3 value: 41.38 - type: recall_at_5 value: 50.55 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 92.69949840401279 - type: f1 value: 92.54141471311786 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 72.56041951664386 - type: f1 value: 55.88499977508287 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.62071284465365 - type: f1 value: 69.36717546572152 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.35843981170142 - type: f1 value: 76.15496453538884 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.33664956793118 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 27.883839621715524 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 30.096874986740758 - type: mrr value: 30.97300481932132 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.4 - type: map_at_10 value: 11.852 - type: map_at_100 value: 14.758 - type: map_at_1000 value: 16.134 - type: map_at_3 value: 8.558 - type: map_at_5 value: 10.087 - type: mrr_at_1 value: 44.272 - type: mrr_at_10 value: 52.05800000000001 - type: mrr_at_100 value: 52.689 - type: mrr_at_1000 value: 52.742999999999995 - type: mrr_at_3 value: 50.205999999999996 - type: mrr_at_5 value: 51.367 - type: ndcg_at_1 value: 42.57 - type: ndcg_at_10 value: 32.449 - type: ndcg_at_100 value: 29.596 - type: ndcg_at_1000 value: 38.351 - type: ndcg_at_3 value: 37.044 - type: ndcg_at_5 value: 35.275 - type: precision_at_1 value: 44.272 - type: precision_at_10 value: 23.87 - type: precision_at_100 value: 7.625 - type: precision_at_1000 value: 2.045 - type: precision_at_3 value: 34.365 - type: precision_at_5 value: 30.341 - type: recall_at_1 value: 5.4 - type: recall_at_10 value: 15.943999999999999 - type: recall_at_100 value: 29.805 - type: recall_at_1000 value: 61.695 - type: recall_at_3 value: 9.539 - type: recall_at_5 value: 12.127 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 36.047000000000004 - type: map_at_10 value: 51.6 - type: map_at_100 value: 52.449999999999996 - type: map_at_1000 value: 52.476 - type: map_at_3 value: 47.452 - type: map_at_5 value: 49.964 
- type: mrr_at_1 value: 40.382 - type: mrr_at_10 value: 54.273 - type: mrr_at_100 value: 54.859 - type: mrr_at_1000 value: 54.876000000000005 - type: mrr_at_3 value: 51.014 - type: mrr_at_5 value: 52.983999999999995 - type: ndcg_at_1 value: 40.353 - type: ndcg_at_10 value: 59.11300000000001 - type: ndcg_at_100 value: 62.604000000000006 - type: ndcg_at_1000 value: 63.187000000000005 - type: ndcg_at_3 value: 51.513 - type: ndcg_at_5 value: 55.576 - type: precision_at_1 value: 40.353 - type: precision_at_10 value: 9.418 - type: precision_at_100 value: 1.1440000000000001 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 23.078000000000003 - type: precision_at_5 value: 16.250999999999998 - type: recall_at_1 value: 36.047000000000004 - type: recall_at_10 value: 79.22200000000001 - type: recall_at_100 value: 94.23 - type: recall_at_1000 value: 98.51100000000001 - type: recall_at_3 value: 59.678 - type: recall_at_5 value: 68.967 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 68.232 - type: map_at_10 value: 81.674 - type: map_at_100 value: 82.338 - type: map_at_1000 value: 82.36099999999999 - type: map_at_3 value: 78.833 - type: map_at_5 value: 80.58 - type: mrr_at_1 value: 78.64 - type: mrr_at_10 value: 85.164 - type: mrr_at_100 value: 85.317 - type: mrr_at_1000 value: 85.319 - type: mrr_at_3 value: 84.127 - type: mrr_at_5 value: 84.789 - type: ndcg_at_1 value: 78.63 - type: ndcg_at_10 value: 85.711 - type: ndcg_at_100 value: 87.238 - type: ndcg_at_1000 value: 87.444 - type: ndcg_at_3 value: 82.788 - type: ndcg_at_5 value: 84.313 - type: precision_at_1 value: 78.63 - type: precision_at_10 value: 12.977 - type: precision_at_100 value: 1.503 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 36.113 - type: precision_at_5 value: 23.71 - type: recall_at_1 value: 68.232 - type: recall_at_10 value: 93.30199999999999 - type: recall_at_100 value: 98.799 - type: recall_at_1000 value: 99.885 - type: recall_at_3 value: 84.827 - type: recall_at_5 value: 89.188 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 45.71879170816294 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 59.65866311751794 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 4.218 - type: map_at_10 value: 10.337 - type: map_at_100 value: 12.131 - type: map_at_1000 value: 12.411 - type: map_at_3 value: 7.4270000000000005 - type: map_at_5 value: 8.913 - type: mrr_at_1 value: 20.8 - type: mrr_at_10 value: 30.868000000000002 - type: mrr_at_100 value: 31.903 - type: mrr_at_1000 value: 31.972 - type: mrr_at_3 value: 27.367 - type: mrr_at_5 value: 29.372 - type: ndcg_at_1 value: 20.8 - type: ndcg_at_10 value: 17.765 - type: ndcg_at_100 value: 24.914 - type: ndcg_at_1000 value: 30.206 - type: ndcg_at_3 value: 16.64 - type: ndcg_at_5 value: 14.712 - type: precision_at_1 value: 20.8 - type: precision_at_10 value: 9.24 - type: precision_at_100 value: 1.9560000000000002 - type: precision_at_1000 value: 0.32299999999999995 - type: precision_at_3 value: 15.467 - type: precision_at_5 value: 12.94 - type: recall_at_1 value: 
4.218 - type: recall_at_10 value: 18.752 - type: recall_at_100 value: 39.7 - type: recall_at_1000 value: 65.57300000000001 - type: recall_at_3 value: 9.428 - type: recall_at_5 value: 13.133000000000001 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 83.04338850207233 - type: cos_sim_spearman value: 78.5054651430423 - type: euclidean_pearson value: 80.30739451228612 - type: euclidean_spearman value: 78.48377464299097 - type: manhattan_pearson value: 80.40795049052781 - type: manhattan_spearman value: 78.49506205443114 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 84.11596224442962 - type: cos_sim_spearman value: 76.20997388935461 - type: euclidean_pearson value: 80.56858451349109 - type: euclidean_spearman value: 75.92659183871186 - type: manhattan_pearson value: 80.60246102203844 - type: manhattan_spearman value: 76.03018971432664 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 81.34691640755737 - type: cos_sim_spearman value: 82.4018369631579 - type: euclidean_pearson value: 81.87673092245366 - type: euclidean_spearman value: 82.3671489960678 - type: manhattan_pearson value: 81.88222387719948 - type: manhattan_spearman value: 82.3816590344736 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 81.2836092579524 - type: cos_sim_spearman value: 78.99982781772064 - type: euclidean_pearson value: 80.5184271010527 - type: euclidean_spearman value: 78.89777392101904 - type: manhattan_pearson value: 80.53585705018664 - type: manhattan_spearman value: 78.92898405472994 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 86.7349907750784 - type: cos_sim_spearman value: 87.7611234446225 - type: euclidean_pearson value: 86.98759326731624 - type: euclidean_spearman value: 87.58321319424618 - type: manhattan_pearson value: 87.03483090370842 - type: manhattan_spearman value: 87.63278333060288 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 81.75873694924825 - type: cos_sim_spearman value: 83.80237999094724 - type: euclidean_pearson value: 83.55023725861537 - type: euclidean_spearman value: 84.12744338577744 - type: manhattan_pearson value: 83.58816983036232 - type: manhattan_spearman value: 84.18520748676501 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.21630882940174 - type: cos_sim_spearman value: 87.72382883437031 - type: euclidean_pearson value: 88.69933350930333 - type: euclidean_spearman value: 88.24660814383081 - type: manhattan_pearson value: 88.77331018833499 - type: manhattan_spearman value: 88.26109989380632 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 
6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 61.11854063060489 - type: cos_sim_spearman value: 63.14678634195072 - type: euclidean_pearson value: 61.679090067000864 - type: euclidean_spearman value: 62.28876589509653 - type: manhattan_pearson value: 62.082324165511004 - type: manhattan_spearman value: 62.56030932816679 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 84.00319882832645 - type: cos_sim_spearman value: 85.94529772647257 - type: euclidean_pearson value: 85.6661390122756 - type: euclidean_spearman value: 85.97747815545827 - type: manhattan_pearson value: 85.58422770541893 - type: manhattan_spearman value: 85.9237139181532 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 79.16198731863916 - type: mrr value: 94.25202702163487 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 54.761 - type: map_at_10 value: 64.396 - type: map_at_100 value: 65.07 - type: map_at_1000 value: 65.09899999999999 - type: map_at_3 value: 61.846000000000004 - type: map_at_5 value: 63.284 - type: mrr_at_1 value: 57.667 - type: mrr_at_10 value: 65.83099999999999 - type: mrr_at_100 value: 66.36800000000001 - type: mrr_at_1000 value: 66.39399999999999 - type: mrr_at_3 value: 64.056 - type: mrr_at_5 value: 65.206 - type: ndcg_at_1 value: 57.667 - type: ndcg_at_10 value: 68.854 - type: ndcg_at_100 value: 71.59100000000001 - type: ndcg_at_1000 value: 72.383 - type: ndcg_at_3 value: 64.671 - type: ndcg_at_5 value: 66.796 - type: precision_at_1 value: 57.667 - type: precision_at_10 value: 9.167 - type: precision_at_100 value: 1.053 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 25.444 - type: precision_at_5 value: 16.667 - type: recall_at_1 value: 54.761 - type: recall_at_10 value: 80.9 - type: recall_at_100 value: 92.767 - type: recall_at_1000 value: 99 - type: recall_at_3 value: 69.672 - type: recall_at_5 value: 75.083 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.8079207920792 - type: cos_sim_ap value: 94.88470927617445 - type: cos_sim_f1 value: 90.08179959100204 - type: cos_sim_precision value: 92.15481171548117 - type: cos_sim_recall value: 88.1 - type: dot_accuracy value: 99.58613861386138 - type: dot_ap value: 82.94822578881316 - type: dot_f1 value: 77.33333333333333 - type: dot_precision value: 79.36842105263158 - type: dot_recall value: 75.4 - type: euclidean_accuracy value: 99.8069306930693 - type: euclidean_ap value: 94.81367858031837 - type: euclidean_f1 value: 90.01009081735621 - type: euclidean_precision value: 90.83503054989816 - type: euclidean_recall value: 89.2 - type: manhattan_accuracy value: 99.81188118811882 - type: manhattan_ap value: 94.91405337220161 - type: manhattan_f1 value: 90.2763561924258 - type: manhattan_precision value: 92.45283018867924 - type: manhattan_recall value: 88.2 - type: max_accuracy value: 99.81188118811882 - type: max_ap value: 94.91405337220161 - type: max_f1 value: 90.2763561924258 - task: type: Clustering dataset: 
type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 58.511599500053094 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 31.984728147814707 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 49.93428193939015 - type: mrr value: 50.916557911043206 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.562500894537145 - type: cos_sim_spearman value: 31.162587976726307 - type: dot_pearson value: 22.633662187735762 - type: dot_spearman value: 22.723000282378962 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.219 - type: map_at_10 value: 1.871 - type: map_at_100 value: 10.487 - type: map_at_1000 value: 25.122 - type: map_at_3 value: 0.657 - type: map_at_5 value: 1.0699999999999998 - type: mrr_at_1 value: 84 - type: mrr_at_10 value: 89.567 - type: mrr_at_100 value: 89.748 - type: mrr_at_1000 value: 89.748 - type: mrr_at_3 value: 88.667 - type: mrr_at_5 value: 89.567 - type: ndcg_at_1 value: 80 - type: ndcg_at_10 value: 74.533 - type: ndcg_at_100 value: 55.839000000000006 - type: ndcg_at_1000 value: 49.748 - type: ndcg_at_3 value: 79.53099999999999 - type: ndcg_at_5 value: 78.245 - type: precision_at_1 value: 84 - type: precision_at_10 value: 78.4 - type: precision_at_100 value: 56.99999999999999 - type: precision_at_1000 value: 21.98 - type: precision_at_3 value: 85.333 - type: precision_at_5 value: 84.8 - type: recall_at_1 value: 0.219 - type: recall_at_10 value: 2.02 - type: recall_at_100 value: 13.555 - type: recall_at_1000 value: 46.739999999999995 - type: recall_at_3 value: 0.685 - type: recall_at_5 value: 1.13 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 3.5029999999999997 - type: map_at_10 value: 11.042 - type: map_at_100 value: 16.326999999999998 - type: map_at_1000 value: 17.836 - type: map_at_3 value: 6.174 - type: map_at_5 value: 7.979 - type: mrr_at_1 value: 42.857 - type: mrr_at_10 value: 52.617000000000004 - type: mrr_at_100 value: 53.351000000000006 - type: mrr_at_1000 value: 53.351000000000006 - type: mrr_at_3 value: 46.939 - type: mrr_at_5 value: 50.714000000000006 - type: ndcg_at_1 value: 38.775999999999996 - type: ndcg_at_10 value: 27.125 - type: ndcg_at_100 value: 35.845 - type: ndcg_at_1000 value: 47.377 - type: ndcg_at_3 value: 29.633 - type: ndcg_at_5 value: 28.378999999999998 - type: precision_at_1 value: 42.857 - type: precision_at_10 value: 24.082 - type: precision_at_100 value: 6.877999999999999 - type: precision_at_1000 value: 1.463 - type: precision_at_3 value: 29.932 - type: precision_at_5 value: 28.571 - type: recall_at_1 value: 3.5029999999999997 - type: recall_at_10 value: 17.068 - type: recall_at_100 value: 43.361 - type: recall_at_1000 value: 78.835 - type: recall_at_3 value: 6.821000000000001 - type: recall_at_5 value: 10.357 - task: type: 
Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.0954 - type: ap value: 14.216844153511959 - type: f1 value: 54.63687418565117 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 61.46293152235427 - type: f1 value: 61.744177921638645 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 41.12708617788644 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 85.75430649102938 - type: cos_sim_ap value: 73.34252536948081 - type: cos_sim_f1 value: 67.53758935173774 - type: cos_sim_precision value: 63.3672525439408 - type: cos_sim_recall value: 72.29551451187335 - type: dot_accuracy value: 81.71305954580676 - type: dot_ap value: 59.5532209082386 - type: dot_f1 value: 56.18466898954705 - type: dot_precision value: 47.830923248053395 - type: dot_recall value: 68.07387862796834 - type: euclidean_accuracy value: 85.81987244441795 - type: euclidean_ap value: 73.34325409809446 - type: euclidean_f1 value: 67.83451360417443 - type: euclidean_precision value: 64.09955388588871 - type: euclidean_recall value: 72.0316622691293 - type: manhattan_accuracy value: 85.68277999642368 - type: manhattan_ap value: 73.1535450121903 - type: manhattan_f1 value: 67.928237896289 - type: manhattan_precision value: 63.56945722171113 - type: manhattan_recall value: 72.9287598944591 - type: max_accuracy value: 85.81987244441795 - type: max_ap value: 73.34325409809446 - type: max_f1 value: 67.928237896289 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.90441262079403 - type: cos_sim_ap value: 85.79331880741438 - type: cos_sim_f1 value: 78.31563529842548 - type: cos_sim_precision value: 74.6683424102779 - type: cos_sim_recall value: 82.33754234678165 - type: dot_accuracy value: 84.89928978926534 - type: dot_ap value: 75.25819218316 - type: dot_f1 value: 69.88730119720536 - type: dot_precision value: 64.23362374959665 - type: dot_recall value: 76.63227594702803 - type: euclidean_accuracy value: 89.01695967710637 - type: euclidean_ap value: 85.98986606038852 - type: euclidean_f1 value: 78.5277880014722 - type: euclidean_precision value: 75.22211253701876 - type: euclidean_recall value: 82.13735756082538 - type: manhattan_accuracy value: 88.99561454573679 - type: manhattan_ap value: 85.92262421793953 - type: manhattan_f1 value: 78.38866094740769 - type: manhattan_precision value: 76.02373028505282 - type: manhattan_recall value: 80.9054511857099 - type: max_accuracy value: 89.01695967710637 - type: max_ap value: 85.98986606038852 - type: max_f1 value: 78.5277880014722 language: - en license: mit duplicated_from: michaelfeil/ct2fast-e5-small-v2 --- # # Hugging Face Inference Endpoints -compatible version of michaelfeil/ct2fast-e5-small-v2 
Duplicate of michaelfeil/ct2fast-e5-small-v2, modified to run on Hugging Face Inference Endpoints. Requires a GPU Instance type to run. Creates symbolic links so that ctranslate2 reads the repository model without downloading from HF. # # Fast-Inference with Ctranslate2 Speedup inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU. quantized version of [intfloat/e5-small-v2](https://huggingface.co/intfloat/e5-small-v2) ```bash pip install hf-hub-ctranslate2>=2.12.0 ctranslate2>=3.16.0 ``` ```python # from transformers import AutoTokenizer model_name = "michaelfeil/ct2fast-e5-small-v2" model_name_orig="intfloat/e5-small-v2" from hf_hub_ctranslate2 import EncoderCT2fromHfHub model = EncoderCT2fromHfHub( # load in int8 on CUDA model_name_or_path=model_name, device="cuda", compute_type="int8_float16" ) outputs = model.generate( text=["I like soccer", "I like tennis", "The eiffel tower is in Paris"] ) # perform downstream tasks on outputs outputs["pooler_output"] outputs["last_hidden_state"] outputs["attention_mask"] # alternative, use SentenceTransformer Mix-In # for end-to-end Sentence embeddings generation # (not pulling from this CT2fast-HF repo) from hf_hub_ctranslate2 import CT2SentenceTransformer model = CT2SentenceTransformer( model_name_orig, compute_type="int8_float16", device="cuda" ) embeddings = model.encode( ["I like soccer", "I like tennis", "The eiffel tower is in Paris"], batch_size=32, convert_to_numpy=True, normalize_embeddings=True, ) print(embeddings.shape, embeddings) scores = (embeddings @ embeddings.T) * 100 ``` Checkpoint compatible to [ctranslate2>=3.16.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.12.0](https://github.com/michaelfeil/hf-hub-ctranslate2) - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` Converted on 2023-06-19 using ``` ct2-transformers-converter --model intfloat/e5-small-v2 --output_dir ~/tmp-ct2fast-e5-small-v2 --force --copy_files tokenizer.json modules.json README.md tokenizer_config.json sentence_bert_config.json vocab.txt special_tokens_map.json .gitattributes --trust_remote_code ``` # Licence and other remarks: This is just a quantized version. Licence conditions are intended to be idential to original huggingface repo. # Original description # E5-small-v2 [Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf). Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022 This model has 12 layers and the embedding size is 384. ## Usage Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset. ```python import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor: last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0) return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] # Each input text should start with "query: " or "passage: ". # For tasks other than retrieval, you can simply use the "query: " prefix. input_texts = ['query: how much protein should a female eat', 'query: summit define', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. 
Check out the chart below to see how much protein you should be eating each day.", "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."] tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-small-v2') model = AutoModel.from_pretrained('intfloat/e5-small-v2') # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask']) # (Optionally) normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:2] @ embeddings[2:].T) * 100 print(scores.tolist()) ``` ## Training Details Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf). ## Benchmark Evaluation Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316). ## Citation If you find our paper or models helpful, please consider cite as follows: ``` @article{wang2022text, title={Text Embeddings by Weakly-Supervised Contrastive Pre-training}, author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu}, journal={arXiv preprint arXiv:2212.03533}, year={2022} } ``` ## Limitations This model only works for English texts. Long texts will be truncated to at most 512 tokens. ## Sentence Transformers Below is an example for usage with sentence_transformers. `pip install sentence_transformers~=2.2.2` This is community contributed, and results may vary up to numerical precision. ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('intfloat/e5-small-v2') embeddings = model.encode(input_texts, normalize_embeddings=True) ```
said10/my_test_q_a_demo_model
said10
2023-07-07T19:57:02Z
61
0
transformers
[ "transformers", "tf", "distilbert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-07T19:44:10Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: said10/my_test_q_a_demo_model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # said10/my_test_q_a_demo_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.5327 - Validation Loss: 1.7084 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.4141 | 2.0750 | 0 | | 1.7894 | 1.7084 | 1 | | 1.5327 | 1.7084 | 2 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
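## Example usage (unofficial sketch)

The card does not ship an inference snippet, so the following is a minimal sketch only: it assumes the standard `transformers` question-answering pipeline works with the published Keras weights (`framework="tf"`), and the question/context strings are placeholders rather than anything from the training data.

```python
from transformers import pipeline

# The repository provides TensorFlow (Keras) weights, hence framework="tf".
qa = pipeline(
    "question-answering",
    model="said10/my_test_q_a_demo_model",
    framework="tf",
)

answer = qa(
    question="What architecture is the model based on?",
    context="The model is a fine-tuned version of distilbert-base-uncased for extractive question answering.",
)
print(answer)  # dict with 'answer', 'score', 'start', 'end'
```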
Manab/donut-base-my_model_rapido_2_new
Manab
2023-07-07T19:53:00Z
47
0
transformers
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "base_model:naver-clova-ix/donut-base", "base_model:finetune:naver-clova-ix/donut-base", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-07-07T19:37:08Z
--- license: mit base_model: naver-clova-ix/donut-base tags: - generated_from_trainer datasets: - imagefolder model-index: - name: donut-base-my_model_rapido_2_new results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-my_model_rapido_2_new This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3339 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 8.7865 | 0.69 | 50 | 4.5856 | | 3.8525 | 1.39 | 100 | 2.1648 | | 2.1251 | 2.08 | 150 | 1.4722 | | 1.6172 | 2.78 | 200 | 1.3339 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
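## Example usage (unofficial sketch)

No inference code is given in the card. The snippet below follows the usual Donut decoding recipe and is an assumption-based sketch: the task prompt, the image path, and the presence of the processor files in this repository are not confirmed by the card.

```python
import re
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "Manab/donut-base-my_model_rapido_2_new"
processor = DonutProcessor.from_pretrained(repo)  # assumes processor files are included in the repo
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("document.png").convert("RGB")  # placeholder path
pixel_values = processor(image, return_tensors="pt").pixel_values

# The task prompt depends on how the training data was prepared; "<s>" is a generic guess.
task_prompt = "<s>"
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)

# Strip special tokens and the task start token, then convert the sequence to JSON fields.
sequence = processor.batch_decode(outputs)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()
print(processor.token2json(sequence))
```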
gbellamy/Reinforce-Pixelcopter-PLE-v0
gbellamy
2023-07-07T19:48:12Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-06-27T00:29:22Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 60.90 +/- 54.12 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
ashna022/ppo-lunarLander
ashna022
2023-07-07T19:46:48Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T19:31:11Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 238.84 +/- 66.23
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

```python
import gymnasium as gym  # use `import gym` with stable-baselines3 < 2.0

from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor
from huggingface_sb3 import load_from_hub

# Get model
repo_id = "ashna022/ppo-lunarLander"
filename = "ppo-lunarLander"  # check the repository's file list for the exact archive name (usually a .zip)
checkpoint = load_from_hub(repo_id, filename)
custom_objects = {}  # e.g. {"learning_rate": 0.0, "clip_range": lambda _: 0.0} when loading across SB3 versions
model = PPO.load(checkpoint, custom_objects=custom_objects, print_system_info=True)

# Evaluate
eval_env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
```
aaaririri/distilbert-base-uncased-finetuned-cola
aaaririri
2023-07-07T19:36:26Z
61
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T18:10:42Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: aaaririri/distilbert-base-uncased-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # aaaririri/distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1971 - Validation Loss: 0.5365 - Train Matthews Correlation: 0.5219 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Matthews Correlation | Epoch | |:----------:|:---------------:|:--------------------------:|:-----:| | 0.5210 | 0.4488 | 0.4980 | 0 | | 0.3234 | 0.4759 | 0.4939 | 1 | | 0.1971 | 0.5365 | 0.5219 | 2 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
nolanaatama/dnldtrmprvcv2300pch0x3e9
nolanaatama
2023-07-07T19:16:55Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-07T19:10:08Z
--- license: creativeml-openrail-m ---
VK246/IC_ver2_coco_swin_gpt2_5pc_1e
VK246
2023-07-07T19:14:13Z
45
0
transformers
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:coco", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-07-07T18:51:35Z
--- tags: - generated_from_trainer datasets: - coco metrics: - rouge - bleu model-index: - name: IC_ver2_coco_swin_gpt2_5pc_1e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # IC_ver2_coco_swin_gpt2_5pc_1e This model is a fine-tuned version of [](https://huggingface.co/) on the coco dataset. It achieves the following results on the evaluation set: - Loss: 0.9972 - Rouge1: 34.8608 - Rouge2: 10.9857 - Rougel: 32.1905 - Rougelsum: 32.1794 - Bleu: 6.1162 - Gen Len: 11.2887 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:------:|:-------:| | 1.4995 | 0.23 | 100 | 1.1227 | 28.4179 | 5.9026 | 26.5601 | 26.5682 | 3.0237 | 11.2887 | | 1.136 | 0.45 | 200 | 1.0506 | 31.4866 | 8.9504 | 29.1403 | 29.0996 | 4.2965 | 11.2887 | | 1.0899 | 0.68 | 300 | 1.0203 | 33.9899 | 10.3576 | 31.6646 | 31.6435 | 5.5456 | 11.2887 | | 1.057 | 0.9 | 400 | 0.9972 | 34.8608 | 10.9857 | 32.1905 | 32.1794 | 6.1162 | 11.2887 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
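## Example usage (unofficial sketch)

The card does not show how to generate a caption. The snippet below is a minimal sketch for a Swin + GPT-2 `VisionEncoderDecoderModel`; it assumes the repository contains the image processor and tokenizer files, and the image path and generation settings are illustrative only.

```python
from PIL import Image
from transformers import VisionEncoderDecoderModel, AutoImageProcessor, AutoTokenizer

repo = "VK246/IC_ver2_coco_swin_gpt2_5pc_1e"
model = VisionEncoderDecoderModel.from_pretrained(repo)
image_processor = AutoImageProcessor.from_pretrained(repo)  # assumes preprocessor_config.json is present
tokenizer = AutoTokenizer.from_pretrained(repo)

image = Image.open("example.jpg").convert("RGB")  # placeholder path
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values

output_ids = model.generate(pixel_values, max_length=16, num_beams=4)
caption = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(caption)
```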
bpw1621/ppo-LunarLander-v2
bpw1621
2023-07-07T19:03:06Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T19:02:46Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 263.55 +/- 22.12 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
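While the usage section above is still a TODO, here is a minimal, unofficial loading sketch; the archive filename inside the repository is an assumption and should be checked against the model's file list.

```python
import gymnasium as gym

from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor
from huggingface_sb3 import load_from_hub

# Filename is assumed; check the repository for the actual .zip name.
checkpoint = load_from_hub(repo_id="bpw1621/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

eval_env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```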
andressrg/textual_inversion_meal_3
andressrg
2023-07-07T18:56:04Z
30
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-07T18:43:57Z
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---

# Textual inversion text2image fine-tuning - andressrg/textual_inversion_meal_3

These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
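A minimal, unofficial sketch of applying the learned embedding at inference time is shown below. It assumes a recent `diffusers` release with `load_textual_inversion` support, and the `<meal>` placeholder token is only a guess — use whatever placeholder token was set during training.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model the embedding was trained against.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Pull the learned textual-inversion embedding from this repository.
pipe.load_textual_inversion("andressrg/textual_inversion_meal_3")

# "<meal>" is a hypothetical placeholder token; replace it with the token used at training time.
image = pipe("a photo of <meal> on a rustic wooden table", num_inference_steps=30).images[0]
image.save("meal_example.png")
```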
J-Douglas/pixel-character-lora
J-Douglas
2023-07-07T18:41:40Z
6
2
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:CompVis/stable-diffusion-v1-1", "base_model:adapter:CompVis/stable-diffusion-v1-1", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-07-07T00:52:35Z
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-1
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---

# LoRA text2image fine-tuning - J-Douglas/pixel-characters

These are LoRA adaptation weights for CompVis/stable-diffusion-v1-1. The weights were fine-tuned on the m1guelpf/nouns dataset. You can find some example images below.

![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
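No inference snippet is included in the card, so the following is a minimal, unofficial sketch of attaching the LoRA weights to the base model with `diffusers`; the example prompt is an assumption, not taken from the training data.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model the LoRA was trained on.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-1", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA attention weights published in this repository.
pipe.unet.load_attn_procs("J-Douglas/pixel-character-lora")

# Illustrative prompt in the pixel-character style the weights were tuned for.
image = pipe("a pixel art character with a wizard hat", num_inference_steps=30).images[0]
image.save("pixel_character.png")
```

With newer `diffusers` releases, `pipe.load_lora_weights("J-Douglas/pixel-character-lora")` is the equivalent call.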
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-07_went-g025
jordyvl
2023-07-07T18:32:59Z
104
0
transformers
[ "transformers", "pytorch", "layoutlmv3", "text-classification", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T16:23:14Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-07_went-g025 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-07_went-g025 This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0715 - Accuracy: 0.7275 - Exit 0 Accuracy: 0.1125 - Exit 1 Accuracy: 0.1525 - Exit 2 Accuracy: 0.185 - Exit 3 Accuracy: 0.0625 - Exit 4 Accuracy: 0.0625 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 24 - total_train_batch_size: 288 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:| | No log | 0.72 | 2 | 2.7601 | 0.11 | 0.1025 | 0.0675 | 0.0825 | 0.0625 | 0.0625 | | No log | 1.72 | 4 | 2.7312 | 0.115 | 0.1025 | 0.065 | 0.085 | 0.0625 | 0.0625 | | No log | 2.72 | 6 | 2.6966 | 0.1325 | 0.1025 | 0.06 | 0.0975 | 0.0625 | 0.0625 | | No log | 3.72 | 8 | 2.6638 | 0.1725 | 0.1375 | 0.055 | 0.115 | 0.0625 | 0.0625 | | No log | 4.72 | 10 | 2.6223 | 0.195 | 0.1375 | 0.0575 | 0.1125 | 0.0625 | 0.0625 | | No log | 5.72 | 12 | 2.5770 | 0.215 | 0.13 | 0.08 | 0.115 | 0.0625 | 0.0625 | | No log | 6.72 | 14 | 2.5537 | 0.21 | 0.12 | 0.08 | 0.1125 | 0.0625 | 0.0625 | | No log | 7.72 | 16 | 2.5364 | 0.22 | 0.1275 | 0.09 | 0.1175 | 0.0625 | 0.0625 | | No log | 8.72 | 18 | 2.5008 | 0.2475 | 0.125 | 0.095 | 0.12 | 0.0625 | 0.0625 | | No log | 9.72 | 20 | 2.4477 | 0.2675 | 0.115 | 0.0925 | 0.115 | 0.0625 | 0.0625 | | No log | 10.72 | 22 | 2.3972 | 0.3075 | 0.115 | 0.12 | 0.1175 | 0.0625 | 0.0625 | | No log | 11.72 | 24 | 2.3565 | 0.32 | 0.1125 | 0.11 | 0.1125 | 0.0625 | 0.0625 | | No log | 12.72 | 26 | 2.2957 | 0.3425 | 0.1075 | 0.115 | 0.115 | 0.0625 | 0.0625 | | No log | 13.72 | 28 | 2.2355 | 0.3575 | 0.105 | 0.115 | 0.1175 | 0.0625 | 0.0625 | | No log | 14.72 | 30 | 2.1916 | 0.3625 | 0.1075 | 0.125 | 0.1275 | 0.0625 | 0.0625 | | No log | 15.72 | 32 | 2.1467 | 0.3825 | 0.1075 | 0.13 | 0.1225 | 0.0625 | 0.0625 | | No log | 16.72 | 34 | 2.0775 | 0.405 | 0.1075 | 0.1375 | 0.1225 | 0.0625 | 0.0625 | | No log | 17.72 | 36 | 2.0176 | 0.435 | 0.1125 | 0.1375 | 0.1225 | 0.0625 | 0.0625 | | No log | 18.72 | 38 | 1.9539 | 0.4725 | 0.115 | 0.1375 | 0.1225 | 0.0625 | 0.0625 | | No log | 19.72 | 40 | 1.9007 | 0.485 | 0.105 | 0.14 | 0.1225 | 0.0625 | 0.0625 | | No log | 20.72 | 42 | 1.8501 | 0.52 | 0.1075 | 0.14 | 0.1275 | 0.0625 | 0.0625 | | No log | 21.72 | 44 | 1.7795 | 0.5475 | 0.1075 | 0.14 | 0.125 | 0.0625 | 0.0625 | | No log | 22.72 | 46 | 1.7139 | 0.565 | 0.11 | 0.14 | 0.1275 | 
0.0625 | 0.0625 | | No log | 23.72 | 48 | 1.6892 | 0.57 | 0.1125 | 0.14 | 0.13 | 0.0625 | 0.0625 | | No log | 24.72 | 50 | 1.6345 | 0.5875 | 0.11 | 0.1425 | 0.1275 | 0.0625 | 0.0625 | | No log | 25.72 | 52 | 1.5737 | 0.5975 | 0.1125 | 0.1475 | 0.1275 | 0.0625 | 0.0625 | | No log | 26.72 | 54 | 1.5422 | 0.6 | 0.1125 | 0.1475 | 0.135 | 0.0625 | 0.0625 | | No log | 27.72 | 56 | 1.5227 | 0.6125 | 0.115 | 0.1475 | 0.1375 | 0.0625 | 0.0625 | | No log | 28.72 | 58 | 1.4674 | 0.64 | 0.115 | 0.1475 | 0.1425 | 0.0625 | 0.0625 | | No log | 29.72 | 60 | 1.4152 | 0.65 | 0.115 | 0.1475 | 0.1425 | 0.0625 | 0.0625 | | No log | 30.72 | 62 | 1.4002 | 0.6575 | 0.115 | 0.1475 | 0.145 | 0.0625 | 0.0625 | | No log | 31.72 | 64 | 1.3922 | 0.6625 | 0.115 | 0.145 | 0.145 | 0.0625 | 0.0625 | | No log | 32.72 | 66 | 1.3489 | 0.6725 | 0.115 | 0.145 | 0.1475 | 0.0625 | 0.0625 | | No log | 33.72 | 68 | 1.3166 | 0.68 | 0.115 | 0.1475 | 0.1475 | 0.0625 | 0.0625 | | No log | 34.72 | 70 | 1.3028 | 0.685 | 0.1125 | 0.1475 | 0.1475 | 0.0625 | 0.0625 | | No log | 35.72 | 72 | 1.2779 | 0.6975 | 0.1125 | 0.1475 | 0.1475 | 0.0625 | 0.0625 | | No log | 36.72 | 74 | 1.2494 | 0.705 | 0.1125 | 0.1475 | 0.15 | 0.0625 | 0.0625 | | No log | 37.72 | 76 | 1.2366 | 0.7025 | 0.1125 | 0.1475 | 0.15 | 0.0625 | 0.0625 | | No log | 38.72 | 78 | 1.2214 | 0.705 | 0.1125 | 0.15 | 0.1525 | 0.0625 | 0.0625 | | No log | 39.72 | 80 | 1.1999 | 0.7175 | 0.1125 | 0.1525 | 0.1525 | 0.0625 | 0.0625 | | No log | 40.72 | 82 | 1.1793 | 0.7125 | 0.1125 | 0.1525 | 0.1575 | 0.0625 | 0.0625 | | No log | 41.72 | 84 | 1.1680 | 0.7225 | 0.1125 | 0.1525 | 0.1575 | 0.0625 | 0.0625 | | No log | 42.72 | 86 | 1.1625 | 0.7225 | 0.1125 | 0.1525 | 0.155 | 0.0625 | 0.0625 | | No log | 43.72 | 88 | 1.1471 | 0.7175 | 0.1125 | 0.1525 | 0.1575 | 0.0625 | 0.0625 | | No log | 44.72 | 90 | 1.1232 | 0.7275 | 0.1125 | 0.1525 | 0.1625 | 0.0625 | 0.0625 | | No log | 45.72 | 92 | 1.1188 | 0.7275 | 0.1125 | 0.1525 | 0.1625 | 0.0625 | 0.0625 | | No log | 46.72 | 94 | 1.1196 | 0.7275 | 0.1125 | 0.1525 | 0.1625 | 0.0625 | 0.0625 | | No log | 47.72 | 96 | 1.1133 | 0.725 | 0.1125 | 0.15 | 0.1625 | 0.0625 | 0.0625 | | No log | 48.72 | 98 | 1.1104 | 0.725 | 0.115 | 0.15 | 0.1625 | 0.0625 | 0.0625 | | No log | 49.72 | 100 | 1.1047 | 0.73 | 0.115 | 0.15 | 0.165 | 0.0625 | 0.0625 | | No log | 50.72 | 102 | 1.0973 | 0.7225 | 0.115 | 0.1525 | 0.17 | 0.0625 | 0.0625 | | No log | 51.72 | 104 | 1.0866 | 0.7225 | 0.115 | 0.1525 | 0.175 | 0.0625 | 0.0625 | | No log | 52.72 | 106 | 1.0845 | 0.73 | 0.1125 | 0.1525 | 0.1725 | 0.0625 | 0.0625 | | No log | 53.72 | 108 | 1.0836 | 0.7275 | 0.1125 | 0.1525 | 0.1725 | 0.0625 | 0.0625 | | No log | 54.72 | 110 | 1.0822 | 0.7225 | 0.1125 | 0.1525 | 0.1725 | 0.0625 | 0.0625 | | No log | 55.72 | 112 | 1.0808 | 0.7275 | 0.1125 | 0.1525 | 0.18 | 0.0625 | 0.0625 | | No log | 56.72 | 114 | 1.0766 | 0.725 | 0.1125 | 0.1525 | 0.18 | 0.0625 | 0.0625 | | No log | 57.72 | 116 | 1.0738 | 0.73 | 0.1125 | 0.1525 | 0.1825 | 0.0625 | 0.0625 | | No log | 58.72 | 118 | 1.0721 | 0.7275 | 0.1125 | 0.1525 | 0.185 | 0.0625 | 0.0625 | | No log | 59.72 | 120 | 1.0715 | 0.7275 | 0.1125 | 0.1525 | 0.185 | 0.0625 | 0.0625 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
tum-nlp/roberta-target-demographic-classifier
tum-nlp
2023-07-07T17:36:01Z
115
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "en", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T12:50:27Z
--- license: cc-by-4.0 language: - en pipeline_tag: text-classification --- # Target-Demographic Classifier <!-- Provide a quick summary of what the model is/does. --> The roBERTa-based target-demographic classifier is finetuned on the [CONAN](https://github.com/marcoguerini/CONAN) dataset for classifying whether a response's content is about one or multiple of the 8 target demographics, based on the topic classifier [cardiffnlp/tweet-topic-21-multi](https://huggingface.co/cardiffnlp/tweet-topic-21-multi) Currently trained for the following classes: ["MIGRANTS", "POC", "LGBT+", "MUSLIMS", "WOMEN", "JEWS", "other", "DISABLED"] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> The model is intended for classifying LM-generated dialogue responses, and evaluating their relevancy to the given input sequence.
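## Example usage (unofficial sketch)

A minimal sketch with the standard `transformers` text-classification pipeline is shown below; it assumes the repository ships its tokenizer and label mapping, and the sample input is purely illustrative.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="tum-nlp/roberta-target-demographic-classifier",
    top_k=None,  # return scores for all target-demographic labels
)

response = "Migrants contribute enormously to their host communities and deserve respect."
print(classifier(response))
```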
cczhong/internlm-chat-7b-4bit-gptq
cczhong
2023-07-07T17:33:58Z
6
4
transformers
[ "transformers", "internlm", "feature-extraction", "custom_code", "region:us" ]
feature-extraction
2023-07-07T17:20:17Z
# How to use

Until https://github.com/PanQiWei/AutoGPTQ/pull/189 is merged, you need to install the author's fork of AutoGPTQ first:

```bash
pip install git+https://github.com/cczhong11/AutoGPTQ
```

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

quantized_model_dir = "cczhong/internlm-chat-7b-4bit-gptq"

tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, trust_remote_code=True)
model = AutoGPTQForCausalLM.from_quantized(quantized_model_dir, device="cuda:0", trust_remote_code=True)

response, history = model.chat(tokenizer, "你好", history=[])  # "你好" means "Hello"
```
tum-nlp/bert-counterspeech-classifier
tum-nlp
2023-07-07T17:32:40Z
63
0
transformers
[ "transformers", "tf", "bert", "text-classification", "en", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T12:48:44Z
--- license: cc-by-4.0 language: - en --- # Counter-Speech Classifier <!-- Provide a quick summary of what the model is/does. --> The BERT-based counter-speech classifier is finetuned on the [CONAN](https://github.com/marcoguerini/CONAN) dataset for classifying whether a response is counter-speech, based on the counter-argument classifier [ThinkCERCA/counterargument_hugging](https://huggingface.co/ThinkCERCA/counterargument_hugging) ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> The model is intended for classifying LM-generated dialogue responses, evaluating their validity as counter-speech.
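## Example usage (unofficial sketch)

As with the companion classifier, no usage snippet is given. The sketch below is an assumption-based example using the `transformers` pipeline with the TensorFlow weights published in this repository (`framework="tf"`); the sample text is illustrative only.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="tum-nlp/bert-counterspeech-classifier",
    framework="tf",  # the repository provides TensorFlow weights
)

candidate_response = "Spreading hate solves nothing; every group deserves dignity and a fair hearing."
print(classifier(candidate_response))
```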
tum-nlp/gpt-2-medium-target-aware-counterspeech-generation
tum-nlp
2023-07-07T17:27:41Z
143
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "counter speech", "en", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-07T12:45:07Z
---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: text-generation
tags:
- counter speech
---

# Target-Aware Counter-Speech Generation

<!-- Provide a quick summary of what the model is/does. -->

The target-aware counter-speech generation model is an autoregressive generative language model fine-tuned on hate- and counter-speech pairs from the [CONAN](https://github.com/marcoguerini/CONAN) datasets for generating more contextually relevant counter-speech, based on the [gpt2-medium](https://huggingface.co/gpt2-medium) model. The model utilizes special tokens that embed target-demographic information to guide the generation towards more relevant responses, avoiding off-topic and generic responses. The model is trained on 8 target demographics: Migrants, People of Color (POC), LGBT+, Muslims, Women, Jews, Disabled, and Other.

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

The model is intended for generating counter-speech responses for a given hate speech sequence, combined with special tokens for target-demographic embeddings.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

We observed negative effects such as content hallucination and toxic response generation. Though the intended use is to generate counter-speech for combating online hatred, usage should be monitored carefully with human post-editing or an approval system to ensure a safe and inclusive online environment.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# A list of all available target-demographic tokens
types = ["MIGRANTS", "POC", "LGBT+", "MUSLIMS", "WOMEN", "JEWS", "other", "DISABLED"]

model = AutoModelForCausalLM.from_pretrained("tum-nlp/gpt-2-medium-target-aware-counterspeech-generation")
tokenizer = AutoTokenizer.from_pretrained("tum-nlp/gpt-2-medium-target-aware-counterspeech-generation")
tokenizer.padding_side = "left"
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no padding token by default

prompt = "<|endoftext|> <other> Hate-speech: Human are not created equal, some are born lesser. Counter-speech: "
inputs = tokenizer(prompt, return_tensors="pt", padding=True)

output_sequences = model.generate(
    input_ids=inputs['input_ids'].to(model.device),
    attention_mask=inputs['attention_mask'].to(model.device),
    pad_token_id=tokenizer.eos_token_id,
    max_length=128,
    num_beams=3,
    no_repeat_ngram_size=3,
    num_return_sequences=1,
    early_stopping=True
)

result = tokenizer.decode(output_sequences[0], skip_special_tokens=True)
```

#### Training Hyperparameters

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    num_train_epochs=20,
    learning_rate=3.800568576836524e-05,
    weight_decay=0.050977894796868116,
    warmup_ratio=0.10816909354342182,
    optim="adamw_torch",
    lr_scheduler_type="cosine",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    save_total_limit=3,
    load_best_model_at_end=True,
    auto_find_batch_size=True,
)
```

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Data Card if possible. -->

The model's performance is tested on three test sets, of which two are subsets of the [CONAN](https://github.com/marcoguerini/CONAN) dataset and one is the sexist portion of the [EDOS](https://github.com/rewire-online/edos) dataset.

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why.
-->

The model's performance is tested on a custom evaluation pipeline for counter-speech generation. The pipeline includes CoLA, Toxicity, Hatefulness, Offensiveness, Label and Context Similarity, Validity as Counter-Speech, Repetition Rate, target-demographic F1 and the Arithmetic Mean.

### Results

CONAN

| Model Name | CoLA | TOX | Hate | OFF | L Sim | C Sim | VaCS | RR | F1 | AM |
| ---------- | ---- | --- | ---- | --- | ----- | ----- | ---- | -- | -- | -- |
| Human | 0.937 | 0.955 | 1.000 | 0.997 | - | 0.751 | 0.980 | 0.861 | 0.885 | 0.929 |
| target-aware gpt2-medium | 0.958 | 0.946 | 1.000 | 0.996 | 0.706 | 0.784 | 0.946 | 0.419 | 0.880 | 0.848 |

CONAN SMALL

| Model Name | CoLA | TOX | Hate | OFF | L Sim | C Sim | VaCS | RR | F1 | AM |
| ---------- | ---- | --- | ---- | --- | ----- | ----- | ---- | -- | -- | -- |
| Human | 0.963 | 0.956 | 1.000 | 1.000 | 1.000 | 0.768 | 0.988 | 0.995 | 0.868 | 0.949 |
| target-aware gpt2-medium | 0.975 | 0.931 | 1.000 | 1.000 | 0.728 | 0.783 | 0.888 | 0.911 | 0.792 | 0.890 |

EDOS

| Model Name | CoLA | TOX | Hate | OFF | C Sim | VaCS | RR | F1 | AM |
| ---------- | ---- | --- | ---- | --- | ----- | ---- | -- | -- | -- |
| target-aware gpt2-medium | 0.930 | 0.815 | 0.999 | 0.975 | 0.689 | 0.857 | 0.518 | 0.747 | 0.816 |
TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-fp16
TheBloke
2023-07-07T17:15:13Z
1,563
4
transformers
[ "transformers", "pytorch", "llama", "text-generation", "custom_code", "arxiv:2304.12244", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-07T17:12:09Z
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # WizardLM's WizardLM 13B V1.1 fp16 These are fp16 pytorch format model files for [WizardLM's WizardLM 13B V1.1](https://huggingface.co/WizardLM/WizardLM-13B-V1.1) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test). [Kaio Ken's SuperHOT 13b LoRA](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test) is merged on to the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`. Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/WizardLM/WizardLM-13B-V1.1) ## How to use this model from Python code First make sure you have Einops installed: ``` pip3 install auto-gptq ``` Then run the following code. `config.json` has been default to a sequence length of 8192, but you can also configure this in your Python code. The provided modelling code, activated with `trust_remote_code=True` will automatically set the `scale` parameter from the configured `max_position_embeddings`. Eg for 8192, `scale` is set to `4`. ```python from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline import argparse model_name_or_path = "TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-fp16" use_triton = False tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True) # Change this to the sequence length you want config.max_position_embeddings = 8192 model = AutoModelForCausalLM.from_pretrained(model_name_or_path, config=config, trust_remote_code=True, device_map='auto') # Note: check to confirm if this is correct prompt template is correct for this model! 
prompt = "Tell me about AI" prompt_template=f'''USER: {prompt} ASSISTANT:''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, temperature=0.7, top_p=0.95, repetition_penalty=1.15 ) print(pipe(prompt_template)[0]['generated_text']) ``` ## Using other UIs: monkey patch Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev. It can be theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz. **Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, a NSFW focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). #### Looking for Merged & Quantized Models? Make some please :) #### Using the monkey-patch? 
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor. #### Using Oobabooga with Exllama? Switch your loader to `exllama` or `exllama_hf` Add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use** Example in the command-line: - `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf` In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear. #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model - Cutoff length: 4096 # Original model card: WizardLM's WizardLM 13B V1.1 This is the **Full-Weight** of WizardLM-13B V1.1 model. **Repository**: https://github.com/nlpxucan/WizardLM **Twitter**: https://twitter.com/WizardLM_AI/status/1677282955490918401 - 🔥🔥🔥 [7/7/2023] We released **WizardLM V1.1** models. The **WizardLM-13B-V1.1** is here ([Demo_13B-V1.1](https://e8a06366ccd1c4d1.gradio.app), [Demo_13B-V1.1_bak-1](https://59da107262a25764.gradio.app), [Demo_13B-V1.1_bak-2](https://dfc5113f66739c80.gradio.app), [Full Model Weight](https://huggingface.co/WizardLM/WizardLM-13B-V1.1)). **WizardLM-7B-V1.1**, **WizardLM-30B-V1.1**, and **WizardLM-65B-V1.1** are coming soon. Please checkout the [Full Model Weights](https://huggingface.co/WizardLM) and [paper](https://arxiv.org/abs/2304.12244). - 🔥🔥🔥 [7/7/2023] The **WizardLM-13B-V1.1** achieves **6.74** on [MT-Bench Leaderboard](https://chat.lmsys.org/?leaderboard), **86.32%** on [AlpacaEval Leaderboard](https://tatsu-lab.github.io/alpaca_eval/), and **99.3%** on [WizardLM Eval](https://github.com/nlpxucan/WizardLM/blob/main/WizardLM/data/WizardLM_testset.jsonl). (Note: MT-Bench and AlpacaEval are all self-test, will push update and request review. All tests are completed under their official settings.)
Multi-Domain-Expert-Learning/vietnamese-pythia-3b-deduped
Multi-Domain-Expert-Learning
2023-07-07T17:12:05Z
12
0
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-07T17:01:55Z
--- tags: - generated_from_trainer datasets: - /pfs/lustrep4/scratch/project_462000259/shared_datasets/modified_200/modified_200/ model-index: - name: layer_16,17,18,19,20 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layer_16,17,18,19,20 This model is a fine-tuned version of [/pfs/lustrep4/scratch/project_462000259/shared_models/pythia-2.8b-deduped-base/pythia-2.8b-deduped](https://huggingface.co//pfs/lustrep4/scratch/project_462000259/shared_models/pythia-2.8b-deduped-base/pythia-2.8b-deduped) on the /pfs/lustrep4/scratch/project_462000259/shared_datasets/modified_200/modified_200/ dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 64 - total_train_batch_size: 128 - total_eval_batch_size: 128 - optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.1+rocm5.4.2 - Datasets 2.11.0 - Tokenizers 0.13.3
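## Example usage (unofficial sketch)

The card gives no inference example. A minimal text-generation sketch is shown below; the Vietnamese prompt is purely illustrative and the sampling settings are arbitrary.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Multi-Domain-Expert-Learning/vietnamese-pythia-3b-deduped"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# "Xin chào, hôm nay" means "Hello, today" in Vietnamese; any prompt works here.
inputs = tokenizer("Xin chào, hôm nay", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```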
tyavika/LR1E4-BS8-Distilbert-QA-Pytorch-FULL
tyavika
2023-07-07T17:00:19Z
105
0
transformers
[ "transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-07T14:30:47Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: LR1E4-BS8-Distilbert-QA-Pytorch-FULL results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # LR1E4-BS8-Distilbert-QA-Pytorch-FULL This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7269 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.4302 | 1.0 | 3290 | 1.2902 | | 1.0058 | 2.0 | 6580 | 1.2750 | | 0.6711 | 3.0 | 9870 | 1.4631 | | 0.4224 | 4.0 | 13160 | 1.7269 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
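## Example usage (unofficial sketch)

No usage example is given in the card. The sketch below runs the checkpoint through the low-level `transformers` API (rather than the pipeline) to expose the start/end logits; the question and context strings are placeholders.

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

repo = "tyavika/LR1E4-BS8-Distilbert-QA-Pytorch-FULL"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForQuestionAnswering.from_pretrained(repo)

question = "What optimizer was used?"
context = "The model was fine-tuned with the Adam optimizer and a linear learning-rate schedule."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start/end token positions and decode the answer span.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer_ids = inputs["input_ids"][0, start : end + 1]
print(tokenizer.decode(answer_ids, skip_special_tokens=True))
```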
camus-ng/dreambooth_cory_v20_prl_ten
camus-ng
2023-07-07T16:48:11Z
1
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:stabilityai/stable-diffusion-2", "base_model:finetune:stabilityai/stable-diffusion-2", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-06T14:34:39Z
--- license: creativeml-openrail-m base_model: stabilityai/stable-diffusion-2 instance_prompt: a photo of a <ntvc> man tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - dreambooth inference: true --- # DreamBooth - camus-ng/dreambooth_cory_v20_prl_ten This is a dreambooth model derived from stabilityai/stable-diffusion-2. The weights were trained on a photo of a <ntvc> man using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: True.
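A minimal, unofficial inference sketch is shown below. It assumes the full Stable Diffusion pipeline was pushed to this repository (as the DreamBooth training script normally does); everything in the prompt beyond the `<ntvc>` identifier is illustrative.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "camus-ng/dreambooth_cory_v20_prl_ten", torch_dtype=torch.float16
).to("cuda")

# The identifier token "<ntvc>" comes from the instance prompt used for training.
image = pipe("a photo of a <ntvc> man hiking in the mountains", num_inference_steps=30).images[0]
image.save("ntvc_man.png")
```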
paripi/Malishka
paripi
2023-07-07T16:44:13Z
137
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-13T15:16:49Z
---
tags:
- conversational
---

# chatbot

Dataset: https://www.kaggle.com/datasets/projjal1/human-conversation-training-data
jordyvl/LayoutLMv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-07_baseline
jordyvl
2023-07-07T16:43:18Z
103
0
transformers
[ "transformers", "pytorch", "layoutlmv3", "text-classification", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T13:25:31Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: LayoutLMv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-07_baseline results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # LayoutLMv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-07_baseline This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9950 - Accuracy: 0.78 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 6 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 24 - total_train_batch_size: 144 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.9 | 5 | 2.7384 | 0.105 | | No log | 1.9 | 10 | 2.6717 | 0.14 | | No log | 2.9 | 15 | 2.6154 | 0.21 | | No log | 3.9 | 20 | 2.5111 | 0.245 | | No log | 4.9 | 25 | 2.4349 | 0.2575 | | No log | 5.9 | 30 | 2.3158 | 0.3 | | No log | 6.9 | 35 | 2.2286 | 0.335 | | No log | 7.9 | 40 | 2.1051 | 0.3675 | | No log | 8.9 | 45 | 2.0340 | 0.41 | | No log | 9.9 | 50 | 1.8834 | 0.495 | | No log | 10.9 | 55 | 1.7616 | 0.5275 | | No log | 11.9 | 60 | 1.6547 | 0.5575 | | No log | 12.9 | 65 | 1.5398 | 0.585 | | No log | 13.9 | 70 | 1.4345 | 0.615 | | No log | 14.9 | 75 | 1.3810 | 0.63 | | No log | 15.9 | 80 | 1.2689 | 0.685 | | No log | 16.9 | 85 | 1.2218 | 0.6625 | | No log | 17.9 | 90 | 1.1964 | 0.6825 | | No log | 18.9 | 95 | 1.1421 | 0.6875 | | No log | 19.9 | 100 | 1.1136 | 0.71 | | No log | 20.9 | 105 | 1.0863 | 0.715 | | No log | 21.9 | 110 | 1.0472 | 0.7075 | | No log | 22.9 | 115 | 1.0367 | 0.7375 | | No log | 23.9 | 120 | 1.0132 | 0.7175 | | No log | 24.9 | 125 | 0.9760 | 0.7375 | | No log | 25.9 | 130 | 0.9697 | 0.7275 | | No log | 26.9 | 135 | 0.9621 | 0.7375 | | No log | 27.9 | 140 | 0.9532 | 0.745 | | No log | 28.9 | 145 | 0.9258 | 0.7475 | | No log | 29.9 | 150 | 0.9703 | 0.7475 | | No log | 30.9 | 155 | 0.9199 | 0.765 | | No log | 31.9 | 160 | 0.9678 | 0.745 | | No log | 32.9 | 165 | 0.9110 | 0.7675 | | No log | 33.9 | 170 | 0.9723 | 0.755 | | No log | 34.9 | 175 | 0.9083 | 0.78 | | No log | 35.9 | 180 | 0.9427 | 0.76 | | No log | 36.9 | 185 | 0.9301 | 0.77 | | No log | 37.9 | 190 | 0.9318 | 0.765 | | No log | 38.9 | 195 | 0.9486 | 0.77 | | No log | 39.9 | 200 | 0.9676 | 0.755 | | No log | 40.9 | 205 | 0.9586 | 0.7675 | | No log | 41.9 | 210 | 0.9516 | 0.7625 | | No log | 42.9 | 215 | 0.9796 | 0.7625 | | No log | 43.9 | 220 | 0.9764 | 0.77 | | No log | 44.9 | 225 | 0.9704 | 0.7675 | | No log | 45.9 | 230 | 0.9842 | 0.775 | | No log | 46.9 | 235 | 1.0011 | 0.7625 | | No log | 47.9 | 240 | 0.9978 | 0.7625 | | No log | 48.9 | 245 | 0.9873 | 0.775 | | No log | 49.9 | 250 | 0.9848 | 0.7825 | | No log | 50.9 | 255 | 0.9857 | 0.7775 | | No log | 51.9 | 260 | 0.9975 | 0.775 | | No log | 52.9 | 265 | 0.9933 | 0.78 | | No log | 53.9 | 270 | 0.9840 | 0.78 | | No log | 54.9 | 275 | 0.9814 
| 0.78 | | No log | 55.9 | 280 | 0.9860 | 0.775 | | No log | 56.9 | 285 | 0.9922 | 0.78 | | No log | 57.9 | 290 | 0.9949 | 0.78 | | No log | 58.9 | 295 | 0.9953 | 0.78 | | No log | 59.9 | 300 | 0.9950 | 0.78 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
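## Example usage (unofficial sketch)

The card contains no inference example. The snippet below is an assumption-heavy sketch: it loads the checkpoint with the standard `LayoutLMv3ForSequenceClassification` head, uses the base `microsoft/layoutlmv3-base` processor with its built-in OCR (which requires `pytesseract` and a Tesseract install), and the document image path is a placeholder.

```python
from PIL import Image
from transformers import LayoutLMv3Processor, LayoutLMv3ForSequenceClassification

repo = "jordyvl/LayoutLMv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-07_baseline"
processor = LayoutLMv3Processor.from_pretrained("microsoft/layoutlmv3-base")  # apply_ocr=True by default
model = LayoutLMv3ForSequenceClassification.from_pretrained(repo)

image = Image.open("scanned_document.png").convert("RGB")  # placeholder path
encoding = processor(image, return_tensors="pt", truncation=True)

logits = model(**encoding).logits
predicted_class = model.config.id2label[int(logits.argmax(-1))]
print(predicted_class)
```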
idealflaw/q-Taxi-v3
idealflaw
2023-07-07T16:40:14Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T11:11:30Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gymnasium as gym  # or `import gym`, depending on your setup

# `load_from_hub` is the pickle-loading helper defined in the Deep RL course notebook
model = load_from_hub(repo_id="idealflaw/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
steb6/textual_inversion_cat
steb6
2023-07-07T16:29:27Z
6
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-07T09:33:17Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion inference: true --- # Textual inversion text2image fine-tuning - steb6/textual_inversion_cat These are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
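No usage snippet is included, so here is a minimal sketch with `diffusers`. The placeholder token in the prompt is a guess (the card does not say which token the embedding was trained on), and fp16 on CUDA is just one possible setup:

```python
import torch
from diffusers import StableDiffusionPipeline

# The repo hosts the full fine-tuned pipeline, so it can be loaded directly
pipe = StableDiffusionPipeline.from_pretrained(
    "steb6/textual_inversion_cat", torch_dtype=torch.float16
).to("cuda")

# Assumption: the learned embedding is bound to a placeholder token like <cat-toy>
prompt = "a photo of a <cat-toy> sitting on a windowsill"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("textual_inversion_sample.png")
```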
TheBloke/Guanaco-33B-SuperHOT-8K-GGML
TheBloke
2023-07-07T16:16:10Z
0
9
null
[ "license:other", "region:us" ]
null
2023-07-07T15:40:26Z
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Tim Dettmers' Guanaco 33B GGML These files are GGML format model files for [Tim Dettmers' Guanaco 33B](https://huggingface.co/timdettmers/guanaco-33b-merged). These are SuperHOT GGMLs with an increased context length. SuperHOT is a new system that employs RoPE to expand context beyond what was originally possible for a model. It was discovered and developed by [kaiokendev](https://huggingface.co/kaiokendev). In order to use the increased context length, you can presently use: * [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [release 1.33](https://github.com/LostRuins/koboldcpp/releases/tag/v1.33) or later. Support is also expected to come to llama.cpp; however, work is still being done to find the optimal implementation. To use the increased context with KoboldCpp, simply use `--contextsize` to set the desired context, e.g. `--contextsize 4096` or `--contextsize 8192`. **NOTE**: Increased context length is an area seeing rapid developments and improvements. It is quite possible that these models may be superseded by new developments in the coming days. If that's the case, I will remove them, or update this README as appropriate. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Guanaco-33B-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Guanaco-33B-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Guanaco-33B-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/timdettmers/guanaco-33b-merged) <!-- compatibility_ggml start --> ## Compatibility These GGMLs will work with any llama.cpp-compatible GGML client that supports k-quants. However, the increased context length won't work without specific support. See the note in the introduction for details on using increased context. ## Explanation of the new k-quant methods The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw). * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw. * GGML_TYPE_Q6_K - "type-0" 6-bit quantization.
Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw. * GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type. Refer to the Provided Files table below to see what files use which methods, and how. <!-- compatibility_ggml end --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | guanaco-33b-superhot-8k.ggmlv3.q2_K.bin | q2_K | 2 | 13.71 GB | 16.21 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. | | guanaco-33b-superhot-8k.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.28 GB | 19.78 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | guanaco-33b-superhot-8k.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 15.72 GB | 18.22 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | guanaco-33b-superhot-8k.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 14.06 GB | 16.56 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors | | guanaco-33b-superhot-8k.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 19.62 GB | 22.12 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K | | guanaco-33b-superhot-8k.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 18.36 GB | 20.86 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors | | guanaco-33b-superhot-8k.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 23.05 GB | 25.55 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K | | guanaco-33b-superhot-8k.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 22.40 GB | 24.90 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors | | guanaco-33b-superhot-8k.ggmlv3.q6_K.bin | q6_K | 6 | 26.69 GB | 29.19 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ## How to run in `koboldcpp` On Linux I use the following command line to launch the KoboldCpp UI with CUDA acceleration and a context size of 4096: ``` python ./koboldcpp.py --stream --unbantokens --threads 8 --usecublas --gpulayers 100 guanaco-33b-superhot-8k.ggmlv3.q4_K_M.bin ``` Change `--gpulayers 100` to the number of layers you want/are able to offload to the GPU. Remove it if you don't have GPU acceleration. For OpenCL acceleration, change `--usecublas` to `--useclblast 0 0`. You may need to change the second `0` to `1` if you have both an iGPU and a discrete GPU. If you prefer a Python API, a brief llama-cpp-python sketch (without the extended context) is included at the end of this card. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz. **Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, a NSFW focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). #### Looking for Merged & Quantized Models? Make some please :) #### Using the monkey-patch? You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor. #### Using Oobabooga with Exllama? Switch your loader to `exllama` or `exllama_hf` Add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use** Example in the command-line: - `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf` In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear. 
#### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model - Cutoff length: 4096 # Original model card: Tim Dettmers' Guanaco 33B No original model card was provided.
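As referenced above, here is a hedged llama-cpp-python sketch for the quantised files in this repo. It stays at the standard 2048-token context, since the extended 8K context currently requires KoboldCpp; the file name, GPU layer count, and prompt format (the usual Guanaco convention) are assumptions to adjust for your setup.

```python
from llama_cpp import Llama

# Standard 2048-token context; the 8K SuperHOT context currently needs KoboldCpp (see above)
llm = Llama(
    model_path="./guanaco-33b-superhot-8k.ggmlv3.q4_K_M.bin",  # any of the provided files
    n_ctx=2048,
    n_gpu_layers=40,  # placeholder: tune (or remove) to match available VRAM
)

output = llm(
    "### Human: Tell me about llamas.\n### Assistant:",  # assumed Guanaco prompt convention
    max_tokens=256,
    temperature=0.7,
    stop=["### Human:"],
)
print(output["choices"][0]["text"])
```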
mitra-mir/setfit_model_Independence_labelintl_epochs2
mitra-mir
2023-07-07T16:11:15Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-07-07T16:11:02Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # mitra-mir/setfit_model_Independence_labelintl_epochs2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('mitra-mir/setfit_model_Independence_labelintl_epochs2') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=mitra-mir/setfit_model_Independence_labelintl_epochs2) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 177 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 2, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 354, "warmup_steps": 36, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
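Building on the usage snippet above, the embeddings can be compared directly for semantic search or clustering with the cosine-similarity helper bundled with sentence-transformers; the example sentences below are invented:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("mitra-mir/setfit_model_Independence_labelintl_epochs2")

corpus = [
    "The central bank sets interest rates without government interference.",
    "The committee reviewed last quarter's travel expenses.",
]
query = "How independent is the monetary authority?"

corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_emb, corpus_emb)[0]  # cosine similarity per corpus entry
best = int(scores.argmax())
print(corpus[best], float(scores[best]))
```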
SarielSinLuo/bert-large-uncased-finetuned-stsb
SarielSinLuo
2023-07-07T16:09:15Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T16:06:28Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - spearmanr model-index: - name: bert-large-uncased-finetuned-stsb results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: stsb split: validation args: stsb metrics: - name: Spearmanr type: spearmanr value: 0.8958757999348048 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-uncased-finetuned-stsb This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4441 - Pearson: 0.8992 - Spearmanr: 0.8959 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:| | No log | 1.0 | 360 | 0.5527 | 0.8929 | 0.8932 | | 0.7164 | 2.0 | 720 | 0.4669 | 0.8965 | 0.8952 | | 0.2827 | 3.0 | 1080 | 0.4438 | 0.8964 | 0.8938 | | 0.2827 | 4.0 | 1440 | 0.4648 | 0.8978 | 0.8956 | | 0.1529 | 5.0 | 1800 | 0.4441 | 0.8992 | 0.8959 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
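The card omits an inference example. Since STS-B is a single-output regression task (a similarity score on roughly a 0-5 scale), a minimal sketch could look like this; the sentence pair is invented:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "SarielSinLuo/bert-large-uncased-finetuned-stsb"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer(
    "A man is playing a guitar.",
    "Someone is strumming an instrument.",
    return_tensors="pt",
)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # single regression output
print(f"similarity score: {score:.2f}")
```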
JoshELambert/markets
JoshELambert
2023-07-07T15:53:15Z
6
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-07-07T15:12:22Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # JoshELambert/markets This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("JoshELambert/markets") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
mrcmoresi/space_invaders
mrcmoresi
2023-07-07T15:52:19Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T14:00:55Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 602.50 +/- 123.60 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mrcmoresi -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mrcmoresi -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mrcmoresi ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
TheBloke/Guanaco-33B-SuperHOT-8K-fp16
TheBloke
2023-07-07T15:46:12Z
16
4
transformers
[ "transformers", "pytorch", "llama", "text-generation", "custom_code", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-07T15:40:27Z
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Tim Dettmers' Guanaco 33B fp16 These are fp16 pytorch format model files for [Tim Dettmers' Guanaco 33B](https://huggingface.co/timdettmers/guanaco-33b-merged) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-33b-8k-no-rlhf-test). [Kaio Ken's SuperHOT 33b LoRA](https://huggingface.co/kaiokendev/superhot-33b-8k-no-rlhf-test) is merged onto the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`. Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Guanaco-33B-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Guanaco-33B-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Guanaco-33B-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/timdettmers/guanaco-33b-merged) ## How to use this model from Python code First make sure you have Einops installed: ``` pip3 install einops ``` Then run the following code. `config.json` defaults to a sequence length of 8192, but you can also configure this in your Python code. The provided modelling code, activated with `trust_remote_code=True`, will automatically set the `scale` parameter from the configured `max_position_embeddings`. E.g. for 8192, `scale` is set to `4`. ```python from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline model_name_or_path = "TheBloke/Guanaco-33B-SuperHOT-8K-fp16" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True) # Change this to the sequence length you want config.max_position_embeddings = 8192 model = AutoModelForCausalLM.from_pretrained(model_name_or_path, config=config, trust_remote_code=True, device_map='auto') # Note: check to confirm that this prompt template is correct for this model!
prompt = "Tell me about AI" prompt_template=f'''USER: {prompt} ASSISTANT:''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, temperature=0.7, top_p=0.95, repetition_penalty=1.15 ) print(pipe(prompt_template)[0]['generated_text']) ``` ## Using other UIs: monkey patch Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev. It can be theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz. **Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, a NSFW focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). #### Looking for Merged & Quantized Models? Make some please :) #### Using the monkey-patch? 
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor. #### Using Oobabooga with Exllama? Switch your loader to `exllama` or `exllama_hf` Add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use** Example in the command-line: - `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf` In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear. #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model - Cutoff length: 4096 # Original model card: Tim Dettmers' Guanaco 33B No original model card was provided.
NasimB/gpt2-concat-cbt-rarity-all-4p5k-p3k
NasimB
2023-07-07T15:38:55Z
10
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-07T12:24:00Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-concat-cbt-rarity-all-4p5k-p3k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-concat-cbt-rarity-all-4p5k-p3k This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 3.1891 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.7192 | 0.29 | 500 | 5.6402 | | 5.37 | 0.58 | 1000 | 5.2012 | | 5.0291 | 0.88 | 1500 | 4.9442 | | 4.7507 | 1.17 | 2000 | 4.7965 | | 4.5833 | 1.46 | 2500 | 4.6742 | | 4.4772 | 1.75 | 3000 | 4.5664 | | 4.3569 | 2.04 | 3500 | 4.4890 | | 4.1616 | 2.34 | 4000 | 4.4396 | | 4.1266 | 2.63 | 4500 | 4.3837 | | 4.0807 | 2.92 | 5000 | 4.3330 | | 3.8821 | 3.21 | 5500 | 4.3276 | | 3.8204 | 3.5 | 6000 | 4.2971 | | 3.8083 | 3.8 | 6500 | 4.2614 | | 3.712 | 4.09 | 7000 | 4.2583 | | 3.5343 | 4.38 | 7500 | 4.2532 | | 3.5334 | 4.67 | 8000 | 4.2382 | | 3.5216 | 4.96 | 8500 | 4.2254 | | 3.3634 | 5.26 | 9000 | 4.2379 | | 3.3443 | 5.55 | 9500 | 4.2367 | | 3.3402 | 5.84 | 10000 | 4.2359 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
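No usage example is provided; a generic text-generation sketch for a GPT-2-style checkpoint follows (the prompt and sampling settings are arbitrary choices):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/gpt2-concat-cbt-rarity-all-4p5k-p3k")

out = generator(
    "Once upon a time",
    max_new_tokens=50,
    do_sample=True,
    top_p=0.95,
    temperature=0.8,
)
print(out[0]["generated_text"])
```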
daloopa/tatr-dataset-1000-500epochs
daloopa
2023-07-07T15:25:54Z
192
0
transformers
[ "transformers", "pytorch", "tensorboard", "table-transformer", "object-detection", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
object-detection
2023-07-07T10:06:30Z
--- license: mit tags: - generated_from_trainer model-index: - name: tatr-dataset-1000-500epochs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tatr-dataset-1000-500epochs This model is a fine-tuned version of [microsoft/table-transformer-structure-recognition](https://huggingface.co/microsoft/table-transformer-structure-recognition) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 0.7819 - eval_runtime: 10.4713 - eval_samples_per_second: 13.943 - eval_steps_per_second: 1.814 - epoch: 243.23 - step: 6324 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 500 ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
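For completeness, a hedged inference sketch for a Table Transformer structure-recognition checkpoint like this one. The image processor is taken from the base Microsoft checkpoint in case this repo does not ship one, and the image path and score threshold are placeholders:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, TableTransformerForObjectDetection

processor = AutoImageProcessor.from_pretrained("microsoft/table-transformer-structure-recognition")
model = TableTransformerForObjectDetection.from_pretrained("daloopa/tatr-dataset-1000-500epochs")

image = Image.open("table_crop.png").convert("RGB")  # hypothetical cropped table image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
detections = processor.post_process_object_detection(
    outputs, threshold=0.7, target_sizes=target_sizes
)[0]
for label, score, box in zip(detections["labels"], detections["scores"], detections["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(v, 1) for v in box.tolist()])
```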
Grytpipe/eeunet
Grytpipe
2023-07-07T15:21:39Z
0
0
null
[ "license:unknown", "region:us" ]
null
2023-07-07T14:09:52Z
--- license: unknown --- # Semantic Segmentation of GEE High-Resolution Imagery Fully convolutional neural networks (FCNs) are commonly used for semantic image segmentation, essentially the assignment of every pixel in an image to one of two or more categories. In this notebook we examine a popular FCN architecture, called UNet, to perform a specific semantic segmentation task, namely urban building recognition: the identification, within an arbitrarily complex remote sensing image, of houses, schools, commercial edifices, etc.
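To make the idea concrete, here is a deliberately tiny UNet-style sketch in PyTorch. The depth, channel widths, and two-class output are illustrative choices only, not details of the notebook this card describes:

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # two 3x3 convolutions, each followed by ReLU, as in the classic UNet stage
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=3, n_classes=2):
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)  # per-pixel class logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

# every pixel gets a logit per class; argmax over dim=1 yields the segmentation mask
logits = TinyUNet()(torch.randn(1, 3, 256, 256))
print(logits.shape)  # torch.Size([1, 2, 256, 256])
```

The defining ingredients are the encoder/decoder symmetry and the skip connections that concatenate encoder features into the decoder, which is what lets the network produce a per-pixel class map at the input resolution.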
JacquesVlaming/distilgpt2-finetuned-wikitext2
JacquesVlaming
2023-07-07T15:21:18Z
203
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "dataset:wikitext", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-07T05:53:08Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wikitext model-index: - name: distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the wikitext dataset. It achieves the following results on the evaluation set: - Loss: 3.6441 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7667 | 1.0 | 2334 | 3.6684 | | 3.6383 | 2.0 | 4668 | 3.6468 | | 3.5906 | 3.0 | 7002 | 3.6441 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu117 - Datasets 2.13.0 - Tokenizers 0.13.3
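For reference, the reported eval loss of 3.6441 corresponds to a perplexity of roughly exp(3.6441) ≈ 38.2, and a minimal generation sketch (prompt chosen arbitrarily) looks like this:

```python
import math
from transformers import pipeline

print(f"approx. validation perplexity: {math.exp(3.6441):.1f}")  # from the eval loss above

generator = pipeline("text-generation", model="JacquesVlaming/distilgpt2-finetuned-wikitext2")
out = generator("The history of natural language processing", max_new_tokens=40, do_sample=True)
print(out[0]["generated_text"])
```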