| Column | Dtype | Range / distinct values |
|:--|:--|:--|
| modelId | string | length 5 – 138 |
| author | string | length 2 – 42 |
| last_modified | date | 2020-02-15 11:33:14 – 2025-04-11 00:38:10 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string (categorical) | 420 values |
| tags | sequence | length 1 – 4.05k |
| pipeline_tag | string (categorical) | 54 values |
| createdAt | date | 2022-03-02 23:29:04 – 2025-04-11 00:36:44 |
| card | string | length 11 – 1.01M |
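The table above is the column schema of the dump; the rows that follow are individual model-card records. As a quick orientation, here is a minimal sketch of loading and filtering such a dump with the `datasets` library; the repository id `your-org/model-cards-dump` is a placeholder assumption, not the actual source of this preview.

```python
# Minimal sketch: load the dump and filter rows by the columns described above.
# NOTE: "your-org/model-cards-dump" is a placeholder repo id, not the actual source.
from datasets import load_dataset

ds = load_dataset("your-org/model-cards-dump", split="train")

# Keep only PEFT adapters with at least 5 downloads.
peft_rows = ds.filter(lambda row: row["library_name"] == "peft" and row["downloads"] >= 5)

for row in peft_rows.select(range(min(3, len(peft_rows)))):
    print(row["modelId"], row["downloads"], row["likes"])
```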
duyphu/4b51a78f-59ff-4172-a304-5d8152a8b9f8
duyphu
"2025-01-20T08:20:58Z"
9
0
peft
[ "peft", "safetensors", "gemma", "axolotl", "generated_from_trainer", "base_model:unsloth/gemma-7b-it", "base_model:adapter:unsloth/gemma-7b-it", "license:apache-2.0", "region:us" ]
null
"2025-01-20T07:42:11Z"
--- library_name: peft license: apache-2.0 base_model: unsloth/gemma-7b-it tags: - axolotl - generated_from_trainer model-index: - name: 4b51a78f-59ff-4172-a304-5d8152a8b9f8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/gemma-7b-it bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 278f31eb80c9fbf2_train_data.json ds_type: json format: custom path: /workspace/input_data/278f31eb80c9fbf2_train_data.json type: field_input: premise field_instruction: promptID field_output: hypothesis format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 5 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: duyphu/4b51a78f-59ff-4172-a304-5d8152a8b9f8 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 5 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/278f31eb80c9fbf2_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 3aa7cb98-4748-423e-b27a-362fa59de52f wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 3aa7cb98-4748-423e-b27a-362fa59de52f warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 4b51a78f-59ff-4172-a304-5d8152a8b9f8 This model is a fine-tuned version of [unsloth/gemma-7b-it](https://huggingface.co/unsloth/gemma-7b-it) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.1427 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0001 | 1 | 4.7996 | | 3.8619 | 0.0014 | 10 | 3.0943 | | 2.5086 | 0.0027 | 20 | 2.4348 | | 2.4236 | 0.0041 | 30 | 2.2401 | | 2.4682 | 0.0054 | 40 | 2.1544 | | 1.9709 | 0.0068 | 50 | 2.1427 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
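The card above leaves its usage sections as "More information needed". Below is a minimal loading sketch for this LoRA adapter, assuming it is applied on top of the listed base model `unsloth/gemma-7b-it`; the prompt string is an arbitrary example and does not reproduce the `'{instruction} {input}'` template used during training.

```python
# Minimal sketch: load the base model and apply the LoRA adapter from this repo.
# NOTE: the prompt below is an arbitrary example; it does not reproduce the
# '{instruction} {input}' template used during training.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/gemma-7b-it"
adapter_id = "duyphu/4b51a78f-59ff-4172-a304-5d8152a8b9f8"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("A man inspects the uniform of a figure.", return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```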
lesso01/cea1b498-df82-44e0-8014-8cd567dfacd6
lesso01
"2025-01-27T04:16:34Z"
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM-135M-Instruct", "base_model:adapter:unsloth/SmolLM-135M-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-01-27T04:07:15Z"
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM-135M-Instruct tags: - axolotl - generated_from_trainer model-index: - name: cea1b498-df82-44e0-8014-8cd567dfacd6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/SmolLM-135M-Instruct bf16: true chat_template: llama3 datasets: - data_files: - f1d3370e327a92ea_train_data.json ds_type: json format: custom path: /workspace/input_data/f1d3370e327a92ea_train_data.json type: field_input: wrong_code field_instruction: problem_description field_output: acc_code format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: 2 eval_max_new_tokens: 128 eval_steps: 5 eval_table_size: null flash_attention: false fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: lesso01/cea1b498-df82-44e0-8014-8cd567dfacd6 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 25 micro_batch_size: 2 mlflow_experiment_name: /tmp/f1d3370e327a92ea_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 10 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 804f2318-3724-43a6-9d33-0d5a431313fd wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 804f2318-3724-43a6-9d33-0d5a431313fd warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # cea1b498-df82-44e0-8014-8cd567dfacd6 This model is a fine-tuned version of [unsloth/SmolLM-135M-Instruct](https://huggingface.co/unsloth/SmolLM-135M-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.0003 | 1 | nan | | 0.0 | 0.0016 | 5 | nan | | 0.0 | 0.0032 | 10 | nan | | 0.0 | 0.0048 | 15 | nan | | 0.0 | 0.0064 | 20 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
jossthebos/fine_tuned_ollama_3B_V1
jossthebos
"2025-04-10T11:44:01Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-10T11:38:53Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lesso16/2a4e8abf-936b-4d07-9dec-cf0ef52a43df
lesso16
"2025-03-30T21:49:56Z"
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen1.5-0.5B", "base_model:adapter:Qwen/Qwen1.5-0.5B", "license:other", "region:us" ]
null
"2025-03-30T21:26:12Z"
(Model card unavailable: the fetch returned Hugging Face's HTTP 429 "We had to rate limit you" error page instead of the card content.)
BreadAi/MuseCan-1-1
BreadAi
"2023-03-21T22:04:01Z"
8
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "dataset:breadlicker45/musenet-encoders-12k", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-03-13T10:49:01Z"
--- datasets: - breadlicker45/musenet-encoders-12k ---
shamekhjr/Lunar-Lander-v2-unit8
shamekhjr
"2024-03-14T19:24:23Z"
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
"2024-03-14T19:23:47Z"
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -36.42 +/- 45.58 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': True 'env_id': 'LunarLander-v2' 'total_timesteps': 500000 'learning_rate': 0.0003 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.973 'gae_lambda': 0.98 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'shamekhjr/Lunar-Lander-v2-unit8' 'batch_size': 512 'minibatch_size': 128} ```
commanderxa/ppo-LunarLander-v2
commanderxa
"2023-11-01T16:51:43Z"
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2023-10-29T12:05:42Z"
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 291.71 +/- 17.38 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
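The usage section above is left as "TODO: Add your code". Below is a minimal completion sketch, assuming the checkpoint was pushed with the default `huggingface_sb3` naming (`ppo-LunarLander-v2.zip`) and that Gymnasium with the Box2D extra is installed; check the repository files before relying on the filename.

```python
# Minimal sketch: download the checkpoint from the Hub and roll out one episode.
# NOTE: the filename "ppo-LunarLander-v2.zip" is an assumption; verify it in the repo.
import gymnasium as gym  # requires: pip install "gymnasium[box2d]"
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="commanderxa/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```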
MayBashendy/ArabicNewSplits4_FineTuningAraBERT_run3_AugV5_k17_task5_organization
MayBashendy
"2024-12-09T23:47:19Z"
163
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:aubmindlab/bert-base-arabertv02", "base_model:finetune:aubmindlab/bert-base-arabertv02", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-12-09T23:41:51Z"
--- library_name: transformers base_model: aubmindlab/bert-base-arabertv02 tags: - generated_from_trainer model-index: - name: ArabicNewSplits4_FineTuningAraBERT_run3_AugV5_k17_task5_organization results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ArabicNewSplits4_FineTuningAraBERT_run3_AugV5_k17_task5_organization This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8032 - Qwk: 0.7246 - Mse: 0.8032 - Rmse: 0.8962 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:| | No log | 0.0385 | 2 | 2.3478 | 0.0485 | 2.3478 | 1.5323 | | No log | 0.0769 | 4 | 1.4657 | 0.1930 | 1.4657 | 1.2107 | | No log | 0.1154 | 6 | 1.3861 | 0.1851 | 1.3861 | 1.1773 | | No log | 0.1538 | 8 | 1.7611 | 0.3084 | 1.7611 | 1.3271 | | No log | 0.1923 | 10 | 2.0248 | 0.2039 | 2.0248 | 1.4230 | | No log | 0.2308 | 12 | 1.5862 | 0.2493 | 1.5862 | 1.2594 | | No log | 0.2692 | 14 | 1.4604 | 0.1209 | 1.4604 | 1.2085 | | No log | 0.3077 | 16 | 1.4653 | 0.1040 | 1.4653 | 1.2105 | | No log | 0.3462 | 18 | 1.6174 | 0.2435 | 1.6174 | 1.2718 | | No log | 0.3846 | 20 | 2.0881 | 0.2249 | 2.0881 | 1.4450 | | No log | 0.4231 | 22 | 2.2308 | 0.1964 | 2.2308 | 1.4936 | | No log | 0.4615 | 24 | 2.0003 | 0.2328 | 2.0003 | 1.4143 | | No log | 0.5 | 26 | 1.8717 | 0.2389 | 1.8717 | 1.3681 | | No log | 0.5385 | 28 | 1.8235 | 0.2241 | 1.8235 | 1.3504 | | No log | 0.5769 | 30 | 1.7405 | 0.3101 | 1.7405 | 1.3193 | | No log | 0.6154 | 32 | 1.6524 | 0.3181 | 1.6524 | 1.2855 | | No log | 0.6538 | 34 | 1.6201 | 0.3173 | 1.6201 | 1.2728 | | No log | 0.6923 | 36 | 1.5903 | 0.3173 | 1.5903 | 1.2611 | | No log | 0.7308 | 38 | 1.5839 | 0.3738 | 1.5839 | 1.2585 | | No log | 0.7692 | 40 | 1.4949 | 0.3445 | 1.4949 | 1.2227 | | No log | 0.8077 | 42 | 1.4052 | 0.3573 | 1.4052 | 1.1854 | | No log | 0.8462 | 44 | 1.2829 | 0.3749 | 1.2829 | 1.1327 | | No log | 0.8846 | 46 | 1.0915 | 0.4431 | 1.0915 | 1.0447 | | No log | 0.9231 | 48 | 1.0561 | 0.4425 | 1.0561 | 1.0277 | | No log | 0.9615 | 50 | 1.0552 | 0.4713 | 1.0552 | 1.0272 | | No log | 1.0 | 52 | 1.3293 | 0.4618 | 1.3293 | 1.1530 | | No log | 1.0385 | 54 | 1.2243 | 0.4541 | 1.2243 | 1.1065 | | No log | 1.0769 | 56 | 0.9461 | 0.4596 | 0.9461 | 0.9727 | | No log | 1.1154 | 58 | 0.9965 | 0.4678 | 0.9965 | 0.9982 | | No log | 1.1538 | 60 | 1.3760 | 0.4510 | 1.3760 | 1.1730 | | No log | 1.1923 | 62 | 1.5175 | 0.3780 | 1.5175 | 1.2319 | | No log | 1.2308 | 64 | 1.0406 | 0.5070 | 1.0406 | 1.0201 | | No log | 1.2692 | 66 | 0.7623 | 0.6399 | 0.7623 | 0.8731 | | No log | 1.3077 | 68 | 0.7693 | 0.5924 | 0.7693 | 0.8771 | | No log | 1.3462 | 70 | 0.8106 | 0.5812 | 0.8106 | 0.9003 | | No log | 1.3846 | 72 | 0.9958 | 0.5329 | 0.9958 | 0.9979 | | No log | 1.4231 | 
74 | 1.1326 | 0.5472 | 1.1326 | 1.0642 | | No log | 1.4615 | 76 | 1.0540 | 0.5772 | 1.0540 | 1.0266 | | No log | 1.5 | 78 | 0.9008 | 0.6044 | 0.9008 | 0.9491 | | No log | 1.5385 | 80 | 0.8375 | 0.6130 | 0.8375 | 0.9151 | | No log | 1.5769 | 82 | 0.9499 | 0.6391 | 0.9499 | 0.9747 | | No log | 1.6154 | 84 | 1.1456 | 0.5737 | 1.1456 | 1.0703 | | No log | 1.6538 | 86 | 1.4662 | 0.4862 | 1.4662 | 1.2109 | | No log | 1.6923 | 88 | 1.3067 | 0.5240 | 1.3067 | 1.1431 | | No log | 1.7308 | 90 | 0.8336 | 0.6365 | 0.8336 | 0.9130 | | No log | 1.7692 | 92 | 0.6825 | 0.6952 | 0.6825 | 0.8262 | | No log | 1.8077 | 94 | 0.6987 | 0.6596 | 0.6987 | 0.8359 | | No log | 1.8462 | 96 | 0.6706 | 0.7102 | 0.6706 | 0.8189 | | No log | 1.8846 | 98 | 0.9327 | 0.6369 | 0.9327 | 0.9658 | | No log | 1.9231 | 100 | 1.6015 | 0.5206 | 1.6015 | 1.2655 | | No log | 1.9615 | 102 | 1.8024 | 0.4687 | 1.8024 | 1.3425 | | No log | 2.0 | 104 | 1.5200 | 0.4980 | 1.5200 | 1.2329 | | No log | 2.0385 | 106 | 1.1380 | 0.5661 | 1.1380 | 1.0668 | | No log | 2.0769 | 108 | 0.9398 | 0.6266 | 0.9398 | 0.9694 | | No log | 2.1154 | 110 | 0.8642 | 0.7131 | 0.8642 | 0.9296 | | No log | 2.1538 | 112 | 0.9189 | 0.6663 | 0.9189 | 0.9586 | | No log | 2.1923 | 114 | 0.9631 | 0.6307 | 0.9631 | 0.9814 | | No log | 2.2308 | 116 | 1.0529 | 0.6276 | 1.0529 | 1.0261 | | No log | 2.2692 | 118 | 1.1492 | 0.6030 | 1.1492 | 1.0720 | | No log | 2.3077 | 120 | 1.1043 | 0.6271 | 1.1043 | 1.0509 | | No log | 2.3462 | 122 | 1.2565 | 0.6105 | 1.2565 | 1.1209 | | No log | 2.3846 | 124 | 1.2941 | 0.5923 | 1.2941 | 1.1376 | | No log | 2.4231 | 126 | 1.0658 | 0.6376 | 1.0658 | 1.0324 | | No log | 2.4615 | 128 | 0.8854 | 0.6720 | 0.8854 | 0.9410 | | No log | 2.5 | 130 | 0.9133 | 0.6628 | 0.9133 | 0.9557 | | No log | 2.5385 | 132 | 1.0626 | 0.6114 | 1.0626 | 1.0308 | | No log | 2.5769 | 134 | 0.9316 | 0.6295 | 0.9316 | 0.9652 | | No log | 2.6154 | 136 | 0.7614 | 0.7103 | 0.7614 | 0.8726 | | No log | 2.6538 | 138 | 0.6867 | 0.7083 | 0.6867 | 0.8286 | | No log | 2.6923 | 140 | 0.6916 | 0.7286 | 0.6916 | 0.8317 | | No log | 2.7308 | 142 | 0.7284 | 0.7217 | 0.7284 | 0.8535 | | No log | 2.7692 | 144 | 0.8389 | 0.6888 | 0.8389 | 0.9159 | | No log | 2.8077 | 146 | 0.8900 | 0.6697 | 0.8900 | 0.9434 | | No log | 2.8462 | 148 | 0.8632 | 0.6869 | 0.8632 | 0.9291 | | No log | 2.8846 | 150 | 0.7899 | 0.7146 | 0.7899 | 0.8888 | | No log | 2.9231 | 152 | 0.6991 | 0.7370 | 0.6991 | 0.8361 | | No log | 2.9615 | 154 | 0.6942 | 0.7514 | 0.6942 | 0.8332 | | No log | 3.0 | 156 | 0.7134 | 0.7469 | 0.7134 | 0.8446 | | No log | 3.0385 | 158 | 0.7240 | 0.7302 | 0.7240 | 0.8509 | | No log | 3.0769 | 160 | 0.6538 | 0.7530 | 0.6538 | 0.8086 | | No log | 3.1154 | 162 | 0.6613 | 0.7396 | 0.6613 | 0.8132 | | No log | 3.1538 | 164 | 0.6927 | 0.7592 | 0.6927 | 0.8323 | | No log | 3.1923 | 166 | 0.7670 | 0.7164 | 0.7670 | 0.8758 | | No log | 3.2308 | 168 | 0.8220 | 0.6984 | 0.8220 | 0.9066 | | No log | 3.2692 | 170 | 0.8676 | 0.6647 | 0.8676 | 0.9314 | | No log | 3.3077 | 172 | 1.0876 | 0.6251 | 1.0876 | 1.0429 | | No log | 3.3462 | 174 | 1.1444 | 0.6305 | 1.1444 | 1.0698 | | No log | 3.3846 | 176 | 0.9312 | 0.6720 | 0.9312 | 0.9650 | | No log | 3.4231 | 178 | 0.6769 | 0.7810 | 0.6769 | 0.8227 | | No log | 3.4615 | 180 | 0.5908 | 0.7282 | 0.5908 | 0.7687 | | No log | 3.5 | 182 | 0.5884 | 0.7217 | 0.5884 | 0.7671 | | No log | 3.5385 | 184 | 0.6164 | 0.7671 | 0.6164 | 0.7851 | | No log | 3.5769 | 186 | 0.8198 | 0.7184 | 0.8198 | 0.9054 | | No log | 3.6154 | 188 | 0.9879 | 0.6476 | 0.9879 | 0.9939 | | No 
log | 3.6538 | 190 | 1.0983 | 0.6268 | 1.0983 | 1.0480 | | No log | 3.6923 | 192 | 1.1270 | 0.6293 | 1.1270 | 1.0616 | | No log | 3.7308 | 194 | 1.0456 | 0.6639 | 1.0456 | 1.0225 | | No log | 3.7692 | 196 | 0.8941 | 0.7015 | 0.8941 | 0.9456 | | No log | 3.8077 | 198 | 0.8145 | 0.7127 | 0.8145 | 0.9025 | | No log | 3.8462 | 200 | 0.7347 | 0.7210 | 0.7347 | 0.8571 | | No log | 3.8846 | 202 | 0.6805 | 0.7605 | 0.6805 | 0.8249 | | No log | 3.9231 | 204 | 0.6799 | 0.7469 | 0.6799 | 0.8245 | | No log | 3.9615 | 206 | 0.7396 | 0.7289 | 0.7396 | 0.8600 | | No log | 4.0 | 208 | 0.7595 | 0.7168 | 0.7595 | 0.8715 | | No log | 4.0385 | 210 | 0.7553 | 0.7296 | 0.7553 | 0.8691 | | No log | 4.0769 | 212 | 0.6921 | 0.7380 | 0.6921 | 0.8319 | | No log | 4.1154 | 214 | 0.7132 | 0.7302 | 0.7132 | 0.8445 | | No log | 4.1538 | 216 | 0.6584 | 0.7385 | 0.6584 | 0.8114 | | No log | 4.1923 | 218 | 0.5945 | 0.7658 | 0.5945 | 0.7711 | | No log | 4.2308 | 220 | 0.5995 | 0.7701 | 0.5995 | 0.7743 | | No log | 4.2692 | 222 | 0.6561 | 0.7379 | 0.6561 | 0.8100 | | No log | 4.3077 | 224 | 0.7635 | 0.7135 | 0.7635 | 0.8738 | | No log | 4.3462 | 226 | 0.8571 | 0.6918 | 0.8571 | 0.9258 | | No log | 4.3846 | 228 | 0.7896 | 0.7143 | 0.7896 | 0.8886 | | No log | 4.4231 | 230 | 0.6810 | 0.7493 | 0.6810 | 0.8253 | | No log | 4.4615 | 232 | 0.6293 | 0.7535 | 0.6293 | 0.7933 | | No log | 4.5 | 234 | 0.6166 | 0.7419 | 0.6166 | 0.7853 | | No log | 4.5385 | 236 | 0.6545 | 0.7513 | 0.6545 | 0.8090 | | No log | 4.5769 | 238 | 0.6836 | 0.7461 | 0.6836 | 0.8268 | | No log | 4.6154 | 240 | 0.6813 | 0.7517 | 0.6813 | 0.8254 | | No log | 4.6538 | 242 | 0.6261 | 0.7566 | 0.6261 | 0.7913 | | No log | 4.6923 | 244 | 0.6206 | 0.7491 | 0.6206 | 0.7878 | | No log | 4.7308 | 246 | 0.6726 | 0.7510 | 0.6726 | 0.8201 | | No log | 4.7692 | 248 | 0.7387 | 0.7239 | 0.7387 | 0.8595 | | No log | 4.8077 | 250 | 0.7216 | 0.7259 | 0.7216 | 0.8495 | | No log | 4.8462 | 252 | 0.7200 | 0.7302 | 0.7200 | 0.8485 | | No log | 4.8846 | 254 | 0.7008 | 0.7324 | 0.7008 | 0.8371 | | No log | 4.9231 | 256 | 0.6866 | 0.7554 | 0.6866 | 0.8286 | | No log | 4.9615 | 258 | 0.7409 | 0.7296 | 0.7409 | 0.8608 | | No log | 5.0 | 260 | 0.7823 | 0.7169 | 0.7823 | 0.8845 | | No log | 5.0385 | 262 | 0.7428 | 0.7253 | 0.7428 | 0.8619 | | No log | 5.0769 | 264 | 0.6744 | 0.7603 | 0.6744 | 0.8212 | | No log | 5.1154 | 266 | 0.6231 | 0.7354 | 0.6231 | 0.7894 | | No log | 5.1538 | 268 | 0.6228 | 0.7483 | 0.6228 | 0.7892 | | No log | 5.1923 | 270 | 0.6677 | 0.7679 | 0.6677 | 0.8171 | | No log | 5.2308 | 272 | 0.7490 | 0.7233 | 0.7490 | 0.8654 | | No log | 5.2692 | 274 | 0.7374 | 0.7408 | 0.7374 | 0.8587 | | No log | 5.3077 | 276 | 0.6928 | 0.7481 | 0.6928 | 0.8324 | | No log | 5.3462 | 278 | 0.6963 | 0.7480 | 0.6963 | 0.8345 | | No log | 5.3846 | 280 | 0.6557 | 0.7372 | 0.6557 | 0.8098 | | No log | 5.4231 | 282 | 0.6208 | 0.7429 | 0.6208 | 0.7879 | | No log | 5.4615 | 284 | 0.6113 | 0.7358 | 0.6113 | 0.7818 | | No log | 5.5 | 286 | 0.6351 | 0.7358 | 0.6351 | 0.7969 | | No log | 5.5385 | 288 | 0.7119 | 0.7213 | 0.7119 | 0.8437 | | No log | 5.5769 | 290 | 0.7646 | 0.6976 | 0.7646 | 0.8744 | | No log | 5.6154 | 292 | 0.7822 | 0.6976 | 0.7822 | 0.8844 | | No log | 5.6538 | 294 | 0.8431 | 0.6846 | 0.8431 | 0.9182 | | No log | 5.6923 | 296 | 0.8059 | 0.6999 | 0.8059 | 0.8977 | | No log | 5.7308 | 298 | 0.7984 | 0.7018 | 0.7984 | 0.8935 | | No log | 5.7692 | 300 | 0.8208 | 0.6882 | 0.8208 | 0.9060 | | No log | 5.8077 | 302 | 0.8909 | 0.6763 | 0.8909 | 0.9439 | | No log | 5.8462 | 304 | 0.9075 | 
0.6476 | 0.9075 | 0.9526 | | No log | 5.8846 | 306 | 0.8295 | 0.6583 | 0.8295 | 0.9108 | | No log | 5.9231 | 308 | 0.7346 | 0.7266 | 0.7346 | 0.8571 | | No log | 5.9615 | 310 | 0.6848 | 0.7061 | 0.6848 | 0.8275 | | No log | 6.0 | 312 | 0.6797 | 0.6994 | 0.6797 | 0.8245 | | No log | 6.0385 | 314 | 0.6936 | 0.7061 | 0.6936 | 0.8328 | | No log | 6.0769 | 316 | 0.7169 | 0.7144 | 0.7169 | 0.8467 | | No log | 6.1154 | 318 | 0.7931 | 0.6907 | 0.7931 | 0.8906 | | No log | 6.1538 | 320 | 0.8765 | 0.6473 | 0.8765 | 0.9362 | | No log | 6.1923 | 322 | 0.8896 | 0.6473 | 0.8896 | 0.9432 | | No log | 6.2308 | 324 | 0.9077 | 0.6469 | 0.9077 | 0.9528 | | No log | 6.2692 | 326 | 0.9637 | 0.6439 | 0.9637 | 0.9817 | | No log | 6.3077 | 328 | 1.0161 | 0.6368 | 1.0161 | 1.0080 | | No log | 6.3462 | 330 | 0.9933 | 0.6469 | 0.9933 | 0.9967 | | No log | 6.3846 | 332 | 0.9080 | 0.6469 | 0.9080 | 0.9529 | | No log | 6.4231 | 334 | 0.8706 | 0.6585 | 0.8706 | 0.9331 | | No log | 6.4615 | 336 | 0.8179 | 0.6892 | 0.8179 | 0.9044 | | No log | 6.5 | 338 | 0.7712 | 0.7072 | 0.7712 | 0.8782 | | No log | 6.5385 | 340 | 0.7533 | 0.7200 | 0.7533 | 0.8679 | | No log | 6.5769 | 342 | 0.7641 | 0.7282 | 0.7641 | 0.8741 | | No log | 6.6154 | 344 | 0.7760 | 0.7202 | 0.7760 | 0.8809 | | No log | 6.6538 | 346 | 0.7619 | 0.7246 | 0.7619 | 0.8729 | | No log | 6.6923 | 348 | 0.7421 | 0.7190 | 0.7421 | 0.8615 | | No log | 6.7308 | 350 | 0.7774 | 0.6918 | 0.7774 | 0.8817 | | No log | 6.7692 | 352 | 0.8159 | 0.6892 | 0.8159 | 0.9033 | | No log | 6.8077 | 354 | 0.8256 | 0.6892 | 0.8256 | 0.9086 | | No log | 6.8462 | 356 | 0.7984 | 0.7072 | 0.7984 | 0.8935 | | No log | 6.8846 | 358 | 0.7720 | 0.7267 | 0.7720 | 0.8786 | | No log | 6.9231 | 360 | 0.7540 | 0.7370 | 0.7540 | 0.8683 | | No log | 6.9615 | 362 | 0.7428 | 0.7436 | 0.7428 | 0.8618 | | No log | 7.0 | 364 | 0.7310 | 0.7480 | 0.7310 | 0.8550 | | No log | 7.0385 | 366 | 0.6989 | 0.7568 | 0.6989 | 0.8360 | | No log | 7.0769 | 368 | 0.6974 | 0.7686 | 0.6974 | 0.8351 | | No log | 7.1154 | 370 | 0.7406 | 0.7418 | 0.7406 | 0.8606 | | No log | 7.1538 | 372 | 0.7976 | 0.7192 | 0.7976 | 0.8931 | | No log | 7.1923 | 374 | 0.8175 | 0.7151 | 0.8175 | 0.9042 | | No log | 7.2308 | 376 | 0.7863 | 0.7289 | 0.7863 | 0.8868 | | No log | 7.2692 | 378 | 0.7222 | 0.7461 | 0.7222 | 0.8498 | | No log | 7.3077 | 380 | 0.6902 | 0.7587 | 0.6902 | 0.8308 | | No log | 7.3462 | 382 | 0.6734 | 0.7631 | 0.6734 | 0.8206 | | No log | 7.3846 | 384 | 0.6757 | 0.7587 | 0.6757 | 0.8220 | | No log | 7.4231 | 386 | 0.6999 | 0.7644 | 0.6999 | 0.8366 | | No log | 7.4615 | 388 | 0.7323 | 0.7412 | 0.7323 | 0.8557 | | No log | 7.5 | 390 | 0.7289 | 0.7414 | 0.7289 | 0.8538 | | No log | 7.5385 | 392 | 0.7054 | 0.7483 | 0.7054 | 0.8399 | | No log | 7.5769 | 394 | 0.6899 | 0.7411 | 0.6899 | 0.8306 | | No log | 7.6154 | 396 | 0.6913 | 0.7404 | 0.6913 | 0.8314 | | No log | 7.6538 | 398 | 0.6956 | 0.7296 | 0.6956 | 0.8340 | | No log | 7.6923 | 400 | 0.7063 | 0.7296 | 0.7063 | 0.8404 | | No log | 7.7308 | 402 | 0.7054 | 0.7209 | 0.7054 | 0.8399 | | No log | 7.7692 | 404 | 0.7183 | 0.7250 | 0.7183 | 0.8475 | | No log | 7.8077 | 406 | 0.7491 | 0.7048 | 0.7491 | 0.8655 | | No log | 7.8462 | 408 | 0.7894 | 0.6862 | 0.7894 | 0.8885 | | No log | 7.8846 | 410 | 0.8213 | 0.6911 | 0.8213 | 0.9062 | | No log | 7.9231 | 412 | 0.8621 | 0.6645 | 0.8621 | 0.9285 | | No log | 7.9615 | 414 | 0.8584 | 0.6739 | 0.8584 | 0.9265 | | No log | 8.0 | 416 | 0.8348 | 0.6953 | 0.8348 | 0.9136 | | No log | 8.0385 | 418 | 0.7891 | 0.7145 | 0.7891 | 0.8883 | | No log 
| 8.0769 | 420 | 0.7414 | 0.7296 | 0.7414 | 0.8610 | | No log | 8.1154 | 422 | 0.7408 | 0.7296 | 0.7408 | 0.8607 | | No log | 8.1538 | 424 | 0.7726 | 0.7152 | 0.7726 | 0.8790 | | No log | 8.1923 | 426 | 0.8243 | 0.7053 | 0.8243 | 0.9079 | | No log | 8.2308 | 428 | 0.8950 | 0.6822 | 0.8950 | 0.9460 | | No log | 8.2692 | 430 | 0.9508 | 0.6547 | 0.9508 | 0.9751 | | No log | 8.3077 | 432 | 0.9510 | 0.6547 | 0.9510 | 0.9752 | | No log | 8.3462 | 434 | 0.9099 | 0.6663 | 0.9099 | 0.9539 | | No log | 8.3846 | 436 | 0.8587 | 0.6776 | 0.8587 | 0.9267 | | No log | 8.4231 | 438 | 0.8007 | 0.6999 | 0.8007 | 0.8948 | | No log | 8.4615 | 440 | 0.7467 | 0.7077 | 0.7467 | 0.8641 | | No log | 8.5 | 442 | 0.6993 | 0.7433 | 0.6993 | 0.8363 | | No log | 8.5385 | 444 | 0.6806 | 0.7357 | 0.6806 | 0.8250 | | No log | 8.5769 | 446 | 0.6813 | 0.7357 | 0.6813 | 0.8254 | | No log | 8.6154 | 448 | 0.7007 | 0.7433 | 0.7007 | 0.8371 | | No log | 8.6538 | 450 | 0.7351 | 0.7430 | 0.7351 | 0.8574 | | No log | 8.6923 | 452 | 0.7815 | 0.7122 | 0.7815 | 0.8840 | | No log | 8.7308 | 454 | 0.8178 | 0.6900 | 0.8178 | 0.9043 | | No log | 8.7692 | 456 | 0.8455 | 0.6705 | 0.8455 | 0.9195 | | No log | 8.8077 | 458 | 0.8556 | 0.6705 | 0.8556 | 0.9250 | | No log | 8.8462 | 460 | 0.8456 | 0.6705 | 0.8456 | 0.9196 | | No log | 8.8846 | 462 | 0.8301 | 0.6900 | 0.8301 | 0.9111 | | No log | 8.9231 | 464 | 0.8241 | 0.6900 | 0.8241 | 0.9078 | | No log | 8.9615 | 466 | 0.8036 | 0.6900 | 0.8036 | 0.8964 | | No log | 9.0 | 468 | 0.7736 | 0.7246 | 0.7736 | 0.8796 | | No log | 9.0385 | 470 | 0.7457 | 0.7370 | 0.7457 | 0.8635 | | No log | 9.0769 | 472 | 0.7358 | 0.7414 | 0.7358 | 0.8578 | | No log | 9.1154 | 474 | 0.7424 | 0.7370 | 0.7424 | 0.8616 | | No log | 9.1538 | 476 | 0.7489 | 0.7370 | 0.7489 | 0.8654 | | No log | 9.1923 | 478 | 0.7485 | 0.7370 | 0.7485 | 0.8652 | | No log | 9.2308 | 480 | 0.7383 | 0.7414 | 0.7383 | 0.8593 | | No log | 9.2692 | 482 | 0.7342 | 0.7599 | 0.7342 | 0.8569 | | No log | 9.3077 | 484 | 0.7327 | 0.7643 | 0.7327 | 0.8560 | | No log | 9.3462 | 486 | 0.7311 | 0.7643 | 0.7311 | 0.8551 | | No log | 9.3846 | 488 | 0.7324 | 0.7457 | 0.7324 | 0.8558 | | No log | 9.4231 | 490 | 0.7443 | 0.7434 | 0.7443 | 0.8627 | | No log | 9.4615 | 492 | 0.7582 | 0.7348 | 0.7582 | 0.8707 | | No log | 9.5 | 494 | 0.7656 | 0.7348 | 0.7656 | 0.8750 | | No log | 9.5385 | 496 | 0.7754 | 0.7348 | 0.7754 | 0.8806 | | No log | 9.5769 | 498 | 0.7873 | 0.7246 | 0.7873 | 0.8873 | | 0.3159 | 9.6154 | 500 | 0.7977 | 0.7246 | 0.7977 | 0.8931 | | 0.3159 | 9.6538 | 502 | 0.8088 | 0.6827 | 0.8088 | 0.8993 | | 0.3159 | 9.6923 | 504 | 0.8206 | 0.6827 | 0.8206 | 0.9059 | | 0.3159 | 9.7308 | 506 | 0.8239 | 0.6827 | 0.8239 | 0.9077 | | 0.3159 | 9.7692 | 508 | 0.8227 | 0.6827 | 0.8227 | 0.9070 | | 0.3159 | 9.8077 | 510 | 0.8173 | 0.6827 | 0.8173 | 0.9040 | | 0.3159 | 9.8462 | 512 | 0.8140 | 0.6827 | 0.8140 | 0.9022 | | 0.3159 | 9.8846 | 514 | 0.8104 | 0.6827 | 0.8104 | 0.9002 | | 0.3159 | 9.9231 | 516 | 0.8063 | 0.7168 | 0.8063 | 0.8979 | | 0.3159 | 9.9615 | 518 | 0.8040 | 0.7246 | 0.8040 | 0.8966 | | 0.3159 | 10.0 | 520 | 0.8032 | 0.7246 | 0.8032 | 0.8962 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu118 - Datasets 2.21.0 - Tokenizers 0.19.1
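The card above reports Qwk/MSE/RMSE but gives no usage snippet. Below is a minimal inference sketch, assuming the checkpoint exposes a standard `AutoModelForSequenceClassification` head; the Qwk/MSE metrics suggest an ordinal or regression-style target, and the label mapping is not documented in the card, so inspect `model.config.id2label` after loading.

```python
# Minimal sketch: score one Arabic sentence with the fine-tuned AraBERT classifier.
# NOTE: the label mapping is undocumented here; check model.config.id2label.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "MayBashendy/ArabicNewSplits4_FineTuningAraBERT_run3_AugV5_k17_task5_organization"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

text = "هذا نص تجريبي للتقييم"  # arbitrary example sentence
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits, model.config.id2label)
```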
prxy5604/491bc1ee-1bba-4d3e-ad55-95a6a84477ce
prxy5604
"2025-02-09T04:56:55Z"
12
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/Yarn-Solar-10b-64k", "base_model:adapter:NousResearch/Yarn-Solar-10b-64k", "license:apache-2.0", "region:us" ]
null
"2025-02-09T04:30:51Z"
--- library_name: peft license: apache-2.0 base_model: NousResearch/Yarn-Solar-10b-64k tags: - axolotl - generated_from_trainer model-index: - name: 491bc1ee-1bba-4d3e-ad55-95a6a84477ce results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/Yarn-Solar-10b-64k bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 1cd7d8fbe53718ce_train_data.json ds_type: json format: custom path: /workspace/input_data/1cd7d8fbe53718ce_train_data.json type: field_instruction: question field_output: logical_form_pretty format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: prxy5604/491bc1ee-1bba-4d3e-ad55-95a6a84477ce hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 64 lora_dropout: 0.3 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 450 micro_batch_size: 8 mlflow_experiment_name: /tmp/1cd7d8fbe53718ce_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 150 saves_per_epoch: null sequence_len: 1024 special_tokens: pad_token: </s> strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 3c029bfa-83e4-41ec-b6f9-c5f10e2c4b1b wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 3c029bfa-83e4-41ec-b6f9-c5f10e2c4b1b warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 491bc1ee-1bba-4d3e-ad55-95a6a84477ce This model is a fine-tuned version of [NousResearch/Yarn-Solar-10b-64k](https://huggingface.co/NousResearch/Yarn-Solar-10b-64k) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.2159 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 77 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0131 | 1 | 3.2301 | | 1.0269 | 0.6536 | 50 | 0.2159 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
cleanrl/Alien-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed2
cleanrl
"2023-03-07T16:53:03Z"
0
0
cleanrl
[ "cleanrl", "tensorboard", "Alien-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2023-02-17T15:02:46Z"
--- tags: - Alien-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Alien-v5 type: Alien-v5 metrics: - type: mean_reward value: 4168.00 +/- 2286.61 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Alien-v5** This is a trained model of a PPO agent playing Alien-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper --env-id Alien-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Alien-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/cleanba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Alien-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Alien-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python cleanba_ppo_envpool_impala_atari_wrapper.py --distributed --learner-device-ids 1 2 3 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Alien-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 15360, 'capture_video': False, 'clip_coef': 0.1, 'concurrency': True, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Alien-v5', 'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:2', 'gpu:3', 'gpu:5', 'gpu:6', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3], 'learner_devices': ['gpu:1', 'gpu:2', 'gpu:3'], 'learning_rate': 0.00025, 'local_batch_size': 7680, 'local_minibatch_size': 1920, 'local_num_envs': 60, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 3840, 'norm_adv': True, 'num_envs': 120, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 3255, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 2} ```
Sudhanshu1304/table-extraction
Sudhanshu1304
"2025-03-23T10:21:32Z"
0
0
null
[ "table-extraction", "paddleocr", "huggingface", "license:mit", "region:us" ]
null
"2025-03-16T10:53:10Z"
--- tags: - table-extraction - paddleocr - huggingface license: mit --- # **🌟 Table Extraction Tool: OCR & Computer Vision for Structured Data** [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Build Status](https://img.shields.io/badge/build-passing-brightgreen.svg)](https://github.com/Sudhanshu1304/table-transformer) [![Stars](https://img.shields.io/github/stars/Sudhanshu1304/table-transformer.svg)](https://github.com/Sudhanshu1304/table-transformer/stargazers) [![Watchers](https://img.shields.io/github/watchers/Sudhanshu1304/table-transformer.svg)](https://github.com/Sudhanshu1304/table-transformer/watchers) ## Overview Table Transformer is an advanced open-source tool that leverages state-of-the-art OCR and computer vision techniques to extract structured tabular data from images. It is ideal for enhancing LLM preprocessing, powering data analysis pipelines, and automating your data extraction tasks. ## Features - 📊 **Automatic Table Detection**: Effortlessly detect tables in images. - 📝 **OCR-based Document Processing**: Extract text with high accuracy. - 🧠 **Integrated Models**: Seamlessly combine OCR and table detection models. - 💾 **Flexible Export Options**: Export data as DataFrame, HTML, CSV, and more. --- ## **Tool Overview** <div align="center"> <!-- First Row --> <img src="images/image1.png" alt="Image upload" width="45%" style="margin: 10px;"> <img src="images/image2.png" alt="Table detection & extraction" width="45%" style="margin: 10px;"> <!-- Second Row --> <img src="images/image3.png" alt="Table in HTML format" width="45%" style="margin: 10px;"> <img src="images/image4.png" alt="Table exported as CSV" width="45%" style="margin: 10px;"> </div> --- ## **Open-Source Tools Used** - **[PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)**: For text extraction. - **[Hugging Face Table Detection](https://huggingface.co/foduucom/table-detection-and-extraction)**: For table structure detection. --- ## **Installation** ### **Prerequisites** - Python 3.8+ - Conda ### **Setup** 1. **Clone the Repository** Clone the repository to your local machine: ```bash git clone https://github.com/Sudhanshu1304/table-transformer.git cd table-transformer ``` 2. **Create and Activate Conda Environment** Create a new conda environment and activate it: ```bash conda create --name myenv python=3.12.7 conda activate myenv ``` 3. **Install PaddlePaddle** Install PaddlePaddle in the conda environment: ```bash python -m pip install paddlepaddle==3.0.0rc1 -i https://www.paddlepaddle.org.cn/packages/stable/cpu/ ``` 4. **Install PaddleOCR** Install PaddleOCR: ```bash pip install paddleocr ``` 5. **Install Additional Dependencies** Install other required packages: ```bash pip install ultralytics pandas pip install streamlit ``` ### **Project Structure** ``` project/ ├── src/ │ ├── streamlit_app.py # Streamlit application │ ├── table_creator/ │ │ └── processing.py # Core processing logic │ ├── models/ │ │ └── text.py # table detection and text recognition │ ├── requirements.txt # Dependencies ├── README.md # Project documentation └── .gitignore # Git ignore configuration ``` ### **Usage** Run the Streamlit app to interact with the tool: ```bash streamlit run src/streamlit_app.py ``` ### **Contributions** Contributions are welcome! Please fork the repository and submit a pull request with your improvements or new features. ### **License** This project is licensed under the MIT License. 
--- ## **Connect with Us** Stay updated and connect for any queries or contributions: - **GitHub**: [Sudhanshu1304](https://github.com/Sudhanshu1304) - **LinkedIn**: [Sudhanshu Pandey](https://www.linkedin.com/in/sudhanshu-pandey-847448193/) - **Medium**: [@sudhanshu.dpandey](https://medium.com/@sudhanshu.dpandey) --- ## **Support** If you find this tool useful, please consider giving it a ⭐ on GitHub. Your support is greatly appreciated! Happy Extracting!
bustamiyusoef/Nougat_PLADA_Adv_En_De_Tar2k_01
bustamiyusoef
"2025-03-26T04:58:04Z"
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-03-26T01:55:34Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Benz3010/phi2-gloss-generator
Benz3010
"2025-03-28T12:39:01Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-03-28T12:38:48Z"
(Model card unavailable: the fetch returned Hugging Face's HTTP 429 "We had to rate limit you" error page instead of the card content.)
sarahpann/all_at_once_model
sarahpann
"2025-03-23T15:57:21Z"
2
0
transformers
[ "transformers", "safetensors", "modernbert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2025-03-21T21:46:31Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
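The quick-start section above is still a placeholder. As a hedged sketch only, inferred from the repository tags (`transformers`, `modernbert`, `fill-mask`): the checkpoint should load through the standard `fill-mask` pipeline. The model identifier below is a hypothetical placeholder, not the actual repository name, and the `[MASK]` token is an assumption about the tokenizer.

```python
from transformers import pipeline

# Hypothetical placeholder id; substitute the actual repository name of this ModernBERT checkpoint.
model_id = "your-namespace/your-modernbert-model"

# Standard fill-mask pipeline; assumes the checkpoint ships its own tokenizer and config.
unmasker = pipeline("fill-mask", model=model_id)

# ModernBERT-style tokenizers normally mark the blank with the [MASK] token.
for candidate in unmasker("The capital of France is [MASK]."):
    print(candidate["token_str"], round(candidate["score"], 3))
```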
StepLaw/StepLaw-N_268M-D_24.0B-LR1.56E-02-BS131072
StepLaw
"2025-04-10T08:31:32Z"
0
0
null
[ "safetensors", "step1", "region:us" ]
null
"2025-04-10T08:29:59Z"
null
neody/nemma-100m-it
neody
"2024-11-22T04:54:02Z"
6
0
null
[ "safetensors", "gemma2", "en", "dataset:BAAI/Infinity-Instruct", "base_model:neody/nemma-100m", "base_model:finetune:neody/nemma-100m", "license:mit", "region:us" ]
null
"2024-11-16T07:33:33Z"
--- license: mit datasets: - BAAI/Infinity-Instruct language: - en base_model: - neody/nemma-100m ---
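The card above carries only metadata: MIT license, English, trained on BAAI/Infinity-Instruct, finetuned from neody/nemma-100m. As a hedged sketch, not author documentation: the `gemma2` tag suggests the checkpoint loads through the standard causal-LM classes; the prompt and generation settings below are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "neody/nemma-100m-it"  # repository id from this record

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The "-it" suffix suggests instruction tuning; a plain prompt is used here because
# the card does not document a chat template.
prompt = "Question: Name one fact about the Moon.\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```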
rodekruis/sml-ukr-message-classifier-3
rodekruis
"2024-11-01T17:43:43Z"
5
0
setfit
[ "setfit", "safetensors", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:intfloat/e5-base-v2", "base_model:finetune:intfloat/e5-base-v2", "region:us" ]
text-classification
"2024-11-01T17:43:25Z"
--- base_model: intfloat/e5-base-v2 library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: 'Good afternoon! Certified psychologist with 20 years of experience. My specialization: *Relations *Crises * Emotional well-being * Parent-child relationships * Professional development * Work with dependencies *Self-esteem * Individual requests With my approach, every client is unique and I will help you reach your potential. The first consultation is acquaintance and recommendations. Online on WhatsApp, Telegram, Skype platforms. Recording exclusively in private messages. Take care of yourself!' - text: Good evening! Please give me the contacts of a trusted refrigerator master. - text: People Needed for Remote Employment Schedule - Free from 850$ Weekly with us training from you only Desire! From anywhere in the world The number of places is limited! For details, write to me + in Personal - text: Guys, someone sent parcels with this carrier from Odessa to Plovdiv - text: Tell me a site in Bulgaria for the sale of animals inference: true --- # SetFit with intfloat/e5-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [intfloat/e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. 
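The two-step recipe above (contrastive fine-tuning of the embedding body, then fitting a classification head) maps onto the `setfit` Trainer API. A minimal sketch, assuming the `setfit>=1.0` interface and reusing the batch size and epoch count reported under Training Hyperparameters below; the toy texts and labels are illustrative and are not the model's training data.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Start from the same Sentence Transformer body used by this model.
model = SetFitModel.from_pretrained("intfloat/e5-base-v2")

# Illustrative few-shot examples (the real run used roughly 40-60 samples per label across 20 classes).
train_dataset = Dataset.from_dict({
    "text": [
        "Can you recommend a good pediatrician in Varna?",
        "Where can I get an ultrasound for a fee?",
        "Selling new sneakers 350 UAH",
        "Selling tires Falken 225/45 R17, 40 euros",
    ],
    "label": ["HEALTH", "HEALTH", "GOODS/SERVICES", "GOODS/SERVICES"],
})

args = TrainingArguments(batch_size=16, num_epochs=4)  # values reported in this card

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
# Step 1: contrastive fine-tuning of the e5-base-v2 body on generated sentence pairs.
# Step 2: fitting the LogisticRegression head on embeddings from the fine-tuned body.
trainer.train()

print(model.predict(["Please tell me a good veterinarian near Sambirskaya"]))
```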
## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [intfloat/e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 20 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:---------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | CVA PROGRAMS | <ul><li>"And one more thing... This is a very important point!! The Red Cross compensates Ukrainians for the cost of medicines and even surgery. But you need to provide supporting documents.... Doctor's prescription and receipts. I know that the Ukrainian woman was fully paid for the operation... Not cheap!!!"</li><li>'Good afternoon. My sister needs to close it, but it is not included in the lease agreement. Is it possible to close under this contract? Perhaps the Red Cross will be able to register it at a common address? Has anyone encountered this? Thank you in advance.'</li><li>'Contact the Red Cross, they will help you'</li></ul> | | EDUCATION | <ul><li>"And after vocational school, you won't be able to enter a university?"</li><li>'**We prepare applicants for admission to universities in Slovakia for state-funded departments. ** HIGHER AND SECONDARY EDUCATION IN SLOVAKIA IS: Free education in Slovakia Higher education in Slovakia for citizens of Ukraine, Kazakhstan, Belarus is FREE, provided that they study in the national language, that is, in Slovak. 
Paid form of education Admission without entrance exams, with a certificate of Slovak B1 level Ability to combine study and work The Slovak language is very close to all Slavic languages High level of security Diploma recognized in all European countries We will help you: * choose a university and a specialty * Prepare a package of documents * Recommend Slovak language courses * book a place in the dormitory * Apply for a residence permit based on studies * Provide support at all stages Our Chat <URL> **You can get a FREE consultation here **<URL>'</li><li>'Good afternoon, who is in Sofia, tell me, is there a Ukrainian school there? Still interested in sensory integration and speech therapy?thank you'</li></ul> | | LEGAL | <ul><li>'Hello) Where do I apply for a waiver of temporary protection? I need to be given a paper of refusal'</li><li>'Good afternoon. Which application can I use to see the availability of UKR status?'</li><li>'Good evening, how to refuse the status of ukr? Moving to another country'</li></ul> | | FOOD | <ul><li>'I will give rice for free, deadline until 05/01/2024, there is a nuance: I saw two weevils inside, there are four packs'</li><li>"You were invited by the church not by the Hope of the Generations, but by the New Generation. These are two different churches. It's just that they have a service on the same day and in the same hall, only the time is different. You will need to sit out the service for two hours and after it they will be issued. Help - pasta, cereals, beans. You'll report it yourself."</li><li>'Exchange, drip coffee maker for canned meat.'</li></ul> | | TRANSLATION/LANGUAGE | <ul><li>"ENGLISH ANGLIIS'KA FOR ADULTS AND CHILDREN Online English classes I propose: - individual approach (based on your needs) - high-quality materials and interesting activities - training of all aspects (speaking, listening, reading and writing) - Help with homework - Constant support and motivation I work with all levels. I have a philological education and teaching experience +380675519090 -viber, whatsapp @IYourTeacher - telegram"</li><li>'Good morning, please tell me Bulgarian language courses as part of a group, in Varna'</li><li>"From a subscriber: Hungarian classes are free for refugees. Lots of useful links and testimonials Organisation: ELTE University (teacher Kata) Level: A1 from scratch Classes: 1 time a week from 17:30 to 18:45 (19.00) on Mondays Terms: 2 times for 2-3 months Email: [email protected] Volunteers signed us up first in a Viber chat, then in a group for courses. And now it's the same scheme, only with different teachers. From time to time, there are sets. There was no textbook, each time the teacher brought printouts of the exercises. There were DZ. She spoke to us in English and Hungarian, drew on the blackboard, if it was not clear. I joined from the 10th lesson, so it was difficult at first. A group of 10-12 people of different levels and ages, at the end there were about 5 of us. They wrote a lot, did interesting exercises, learned about the family, body parts, face, home and family, transport and the street, songs, poems, there were complex texts. Not very systematic. There was no exam, we continued to study after the break. Organization: Menedek (classes at the Hungarian Converzum School) Level: A1 from scratch Classes: 2 times a week from 9.00 to 11.15 (2 hours - Tue, Thu) Timeframe: 90 days Email: [email protected] Hungarian language school in Buda (bus 9, stop near the entrance). Small groups of 5-6 people. 
We were given a textbook, a notebook and a pen for free. The teacher is Hungarian, knows a little English, but tried to talk to us in Hungarian. If something is not clear, I drew it on the blackboard. Everyday Hungarian textbook (by the way, I can give it away, I have others). We studied for 90 days, then passed the exam and then the grant was not extended, probably, there were no more classes. Each lesson began with a repetition, sometimes there were DZ, they started with the alphabet, then the topics: about yourself, home, family tree, transport, questions, hobbies. There were no difficult topics. We were prepared for the exam, we were sent the same task for training, and then there was a similar one. The exam consisted of 2 parts - oral and written. Issued a certificate of completion of the courses The Menedek organization also has free courses in the office. They send the schedule by mail or Viber/Telegram. You can look at the website of the organization <URL> and write to the mail [email protected] I didn't go myself, it's beginner level and mostly in the mornings. Organization: CARITAS Level: A1, A2 (there was a level test) Classes: 2 times a week from 16.00 to 19.00 (3 hours with a break) Timeframe: 7 months and now I am walking Teacher: Melinda (2 more classes were in another group with Levi) Email: [email protected] The training lasts for 90 days, we are already completing the 2nd cycle. A good teacher, they finally began to teach systematically and I spoke a little :) The MagyarOk textbook was given out for free, the second time we were given another book with exercises, now we do tasks both in the textbook and in the notebook. For the first 90 days, we studied only with the textbook, the teacher gives a lot of practice - at the beginning and at the end of the lesson. We communicate in a group, ask each other questions, more repetition and new topics, the game format was at first. There is almost no theory, we immediately analyze it with examples and talk. For the first 90 days, she did not give DZ, now she gives a small one after each lesson. The group was 5-6 people, then about 10 and now there are 5-6 of us again. Mostly girls, sometimes foreigners. The teacher speaks mainly Hungarian, sometimes he can explain in English, he knows a couple of words in Russian. After the first 90 days, there was a written exam, we repeated a little before it. Then they gave everyone certificates of completion of the stage. At the moment, these are my best courses I've tried here. It helps me to additionally study at home in the Duolingo app, read theory in Telegram channels and separately in documents - everywhere I see :) If you have any additional information or notice any inaccuracies, please email me @yakatka. #hungarian_for_refugees #hungarian_language_school #hungarian_courses"</li></ul> | | NFI | <ul><li>'I will exchange a large pack of diapers for a large washing gel or capsules (at least for 30 washes)'</li><li>'Hello, I will give things for a 98 cm boy in exchange for fruit for the child. Pickup Borodinsky. Write to me to send a photo. Have a nice day everyone'</li><li>'Registration !for urological pads for IDPs in Lviv from the Rokada Charitable Foundation is open Registration via the link <URL>'</li></ul> | | HEALTH | <ul><li>'Can anyone recommend Ukr doctors, neurologists and a doctor who can help with the spine? Kosice, Presov'</li><li>"Good morning! Can you please recommend a good pediatrician or a good children's clinic?"</li><li>'Good morning. 
Where in Kramatorsk can I get an ultrasound for a fee?'</li></ul> | | ANOMALY | <ul><li>'Tell your pensioner friends that a swindler walks from house to house, introduces himself as a doctor from the 5th office, offers portable devices for the treatment of joints, which are allegedly distributed by the Red Cross and they cost 20 thousand, but if the pensioner presents documents, then a discount of 5 thousand. This swindler, has psychological influence, brazenly enters the house and scams for money. The most expensive such device, costs a thousand gr.'</li><li>'If you receive a link of this format, please do not click.This is a hack.They are trying to write off money from the card!!! !'</li><li>'Making money on CryPto Can be used from any device Time 1-2 hours a day From $250/day Can be combined with the main job Write + in PM'</li></ul> | | OTHER PROGRAMS/NGOS | <ul><li>'And what if people receive from the UN, there is a refusal?'</li><li>"Girls, who designed children's online through the action of e-Baby?"</li><li>'How do you even look for 40+ ?'</li></ul> | | CONNECTIVITY | <ul><li>"Good afternoon! Who has Vodafone? Does anyone know something? Why doesn't it work here in Slovakia?"</li><li>'Who wants to meet in Poland write'</li><li>'Poker and mafia for Ukrainians - every Thursday at Lamparen Nearest game night: Thursday 2.05 18.00 Lamparen (Hlavna 115) Cost: 3-5EUR (free for beginners) New acquaintances and incredible emotions are guaranteed. There are also many other different board games for free. During the evening, you can try to play both mafia and poker (or play one thing all evening). Pre-registration is required. Registration & Details: @max_karpets.'</li></ul> | | CHILDREN | <ul><li>'ACTORS STUDIO The White Theater announces additional recruitment of children and teenagers to Ukrainian-speaking acting groups! If your child dreams: plunge into the fascinating world of reincarnation; try yourself in a variety of roles; feel free on stage; master the skills of self-presentation; learn to speak in a way that people want to listen to; learn how to parse texts of varying complexity, etc. If you have any questions, write to Telegram: @TaisijaB'</li><li>"For children aged 2-5 years (inclusive) This is for those who haven't received these kits yet 13.02.2024 And 14.02.2024 And 15.02.2024 From 10.00 a.m. to 1.00 p.m. In Sunny Beach At the IB of the Ministerial Council, in the second building, we will be giving out IOM art kits (age 2-5 years inclusive) The set includes: Paints Markers Pencils Coloring Magazines Magazines with stickers Glue Colored Paper Pencil Plain It's all packed in a folder. It is necessary to have the original closed and the copy closed (where the child is inscribed) with you. Without a copy of the closed, they do not issue sets! !!!!!!!!!!!!!! <URL>"</li><li>"**For children aged 2-5 years** (inclusive) For those who haven't received these kits yet 23.01.2024 And 24.01.2024 And 25.01.2024 From 10.00 a.m. to 1.00 p.m. In Sunny Beach At the IB of the Ministerial Council, in the second building, we will be giving out IOM art kits (age 2-5 years inclusive) The set includes: Paints Markers Pencils Coloring Pages Stickers glue Colored Paper Pencil Plain It's all in a folder. It is necessary to have a closed and a copy of the closed (where the child is inscribed) with you. Without a copy of the closed, they do not issue sets! !!!!!!!!!!!!!! <URL>"</li></ul> | | PARCEL | <ul><li>'Can I send a parcel with you from Odessa?'</li><li>'Good afternoon! 
Can someone share the contacts of the courier who is Budva-Bar?) I will be very grateful'</li><li>"Or in Ukraine? It is necessary to urgently hand over the child's birth certificate"</li></ul> | | PETS | <ul><li>'Please tell me a good veterinarian, I want to take a cat, if there is somewhere on Sambirskaya, it would be good'</li><li>'Good evening, please tell me what documents are needed to enter with a dog'</li><li>'Hello, we give the cat into good hands! Looking for a home) 2.5 months'</li></ul> | | TRANSPORT/MOVEMENT | <ul><li>'Has anyone flown with Ryanair with an extended passport? Not renewed, but one with a seal for 5 years. Is everything okay?'</li><li>'Transfer in Bulgaria. A cozy and safe sedan for 3 passengers and luggage. You can do it with your pets. The price of the trip will be a VERY pleasant surprise!! Territorialo Plovdiv UA +380997858133 (Telegram, Viber) BG +359888203042 (Telegram, Viber)'</li><li>'Hello! By what number is it possible to order a taxi from Budva to Podgorica airport. Thanks'</li></ul> | | SHELTER | <ul><li>'Good morning ! Ukrainian Montenegrins, please tell me where you can see apartments for rent for a long time in Budva or Podgorica?'</li><li>'And what kind of city is it? Are refugees accepted from Ukraine?'</li><li>'Tell me, please, what is the average price for renting a 3-room apartment in Varna? Thank you just at least the range'</li></ul> | | WORK/JOBS | <ul><li>'Job search on the Internet: features of the Bulgarian labor market, how to respond to a vacancy, offers for Ukrainians You will learn about all this at our FREE offline lecture on the course "Career Development in Bulgaria" from the Network of Ukrainian Educational Hubs and the Association of Innovative and Digital Education, which will be held with the support of the Central European Initiative (CEI) Speaker: Svetlozar Petrov - Managing Director of Jobtaigr OOD. When? The lecture will take place on April 22, 2024 at 18:00 Eastern European Time Duration: 1 hour Venue: Sofia, blvd. Slivnitsa, 188 in. Would. Hurry! REGISTER HERE <URL>'</li><li>'WE INVITE YOU TO WORK! Sales consultant of smartphones and accessories Truskavets We offer: High wages 14000 - 18000 thousand rubles Partial compensation for travel Convenient shift schedule Staff training PHONES OF THE PERSONNEL DEPARTMENT: 094-400-45-00 050-728-48-15 098-353-16-30'</li><li>'Good afternoon! We are looking for pharmacists and assistants, preferably with experience, to work in a large network of pharmacies in Sofia'</li></ul> | | PSS & RFL | <ul><li>'Heart-to-heart conversations over tea 12.04.2024, Friday Beginning at 16.00 For whom: for young people 15+ and adults Hlavna, 68, (our second floor, room 1.04) Blue Dot Kosice spaces We invite everyone who is interested in psychology and personal development to tea with a psychologist This week we continue to explore the theme of LONELINESS AND SOLITUDE. This Friday, we will talk about relationships, about the reasons why this feeling appears in a couple, in a team. About loneliness in emigration. We will find out what is the power of loneliness and how to strengthen your weak points. Interesting exercises are also waiting for you - self-exploration and discussion. Pre-registration is mandatory: <URL>'</li><li>'Vera Stepanova: "Help me find my sonToday, at about 15-00, in the city of Drohobych (city center), my son, Stepanov Alexander Sergeevich, born in 2010, was lost. (13 years) He was wearing a purple T-shirt, gray sweatpants and black sneakers. 
I ask for help in finding the child, I have informed the Police."'</li><li>'Good evening, if there is a Russian-speaking psychologist in Bratislava?'</li></ul> | | MONEY/BANKING | <ul><li>'Good afternoon, maybe someone has hryvnias on the card, I will give cash euros in PM!!'</li><li>'Tell me, please, where can I get a duplicate of the identification code and pension certificate if I am an IDP?'</li><li>'Good afternoon, who can throw hryvnias on the card in exchange for cash in euros or leva. The city of Varna will meet where it will be convenient for you. Urgent Need Help'</li></ul> | | CAR | <ul><li>'Hello, please help, where to go in the situation the guy got into an accident on the car of the company, they demand money for the repair of the car, they have already written off more than 8000 thousand zlotys, they do not provide an invoice, and they say more and more money is already repairing 12000 thousand'</li><li>'Selling a car Renault Laguna Year 2005 Complete set Automaton 5 t lion Burgas'</li><li>'Do you sell a car with Ukrainian license plates in the group? We want to buy it, because we are going home, write to the PM'</li></ul> | | GOODS/SERVICES | <ul><li>'Selling tires Falken 225/45 R17 40 euros Without wired charging 5 euros Humidifier EUR10'</li><li>'Selling new sneakers 350,00UAH'</li><li>'Hello everyone, I am selling an Acer laptop We bought it as a gift, but the child was given 2 identical ones, so it is no longer relevant Condition : Brand New Price :300 EUR Bought for 320EUR Suitable for both a child for study and a student (bought for this purpose) 2 Year Warranty Set : Box , Charger , Check, Laptop , Documents'</li></ul> | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("rodekruis/sml-ukr-message-classifier-3") # Run inference preds = model("Tell me a site in Bulgaria for the sale of animals") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:-------|:----| | Word count | 3 | 39.524 | 722 | | Label | Training Sample Count | |:---------------------|:----------------------| | ANOMALY | 60 | | CAR | 47 | | CHILDREN | 47 | | CONNECTIVITY | 41 | | CVA PROGRAMS | 46 | | EDUCATION | 44 | | FOOD | 57 | | GOODS/SERVICES | 51 | | HEALTH | 56 | | LEGAL | 44 | | MONEY/BANKING | 55 | | NFI | 57 | | OTHER PROGRAMS/NGOS | 55 | | PARCEL | 45 | | PETS | 48 | | PSS & RFL | 51 | | SHELTER | 47 | | TRANSLATION/LANGUAGE | 56 | | TRANSPORT/MOVEMENT | 48 | | WORK/JOBS | 45 | ### Training Hyperparameters - batch_size: (16, 16) - num_epochs: (4, 4) - max_steps: -1 - sampling_strategy: oversampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - l2_weight: 0.01 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: True ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:-----:|:-------------:|:---------------:| | 0.0000 | 1 | 0.2186 | - | | 0.0008 | 50 | 0.333 | - | | 0.0017 | 100 | 0.3241 | - | | 0.0025 | 150 | 0.3388 | - | | 0.0034 | 200 | 0.3241 | - | | 0.0042 | 250 | 0.3196 | - | | 0.0051 | 300 | 0.3178 | - | | 0.0059 | 350 | 0.309 | - | | 0.0067 | 400 | 0.274 | - | | 0.0076 | 450 | 0.2907 | - | | 0.0084 | 500 | 0.2683 | - | | 0.0093 | 550 | 0.2665 | - | | 0.0101 | 600 | 0.2514 | - | | 0.0110 | 650 | 0.2404 | - | | 0.0118 | 700 | 0.2326 | - | | 0.0126 | 750 | 0.2246 | - | | 0.0135 | 800 | 0.215 | - | | 0.0143 | 850 | 0.2102 | - | | 0.0152 | 900 | 0.2031 | - | | 0.0160 | 950 | 0.2017 | - | | 0.0169 | 1000 | 0.1898 | - | | 0.0177 | 1050 | 0.1869 | - | | 0.0185 | 1100 | 0.1852 | - | | 0.0194 | 1150 | 0.1818 | - | | 0.0202 | 1200 | 0.1729 | - | | 0.0211 | 1250 | 0.1691 | - | | 0.0219 | 1300 | 0.1726 | - | | 0.0228 | 1350 | 0.1578 | - | | 0.0236 | 1400 | 0.1544 | - | | 0.0244 | 1450 | 0.1484 | - | | 0.0253 | 1500 | 0.1443 | - | | 0.0261 | 1550 | 0.1367 | - | | 0.0270 | 1600 | 0.1326 | - | | 0.0278 | 1650 | 0.1181 | - | | 0.0286 | 1700 | 0.1317 | - | | 0.0295 | 1750 | 0.1202 | - | | 0.0303 | 1800 | 0.1098 | - | | 0.0312 | 1850 | 0.1064 | - | | 0.0320 | 1900 | 0.1074 | - | | 0.0329 | 1950 | 0.1082 | - | | 0.0337 | 2000 | 0.0996 | - | | 0.0345 | 2050 | 0.0992 | - | | 0.0354 | 2100 | 0.1007 | - | | 0.0362 | 2150 | 0.0924 | - | | 0.0371 | 2200 | 0.0967 | - | | 0.0379 | 2250 | 0.087 | - | | 0.0388 | 2300 | 0.0795 | - | | 0.0396 | 2350 | 0.0816 | - | | 0.0404 | 2400 | 0.0758 | - | | 0.0413 | 2450 | 0.0741 | - | | 0.0421 | 2500 | 0.0765 | - | | 0.0430 | 2550 | 0.0724 | - | | 0.0438 | 2600 | 0.0669 | - | | 0.0447 | 2650 | 0.0537 | - | | 0.0455 | 2700 | 0.0673 | - | | 0.0463 | 2750 | 0.0587 | - | | 0.0472 | 2800 | 0.0615 | - | | 0.0480 | 2850 | 0.0571 | - | | 0.0489 | 2900 | 0.0563 | - | | 0.0497 | 2950 | 0.0487 | - | | 0.0506 | 3000 | 0.0457 | - | | 0.0514 | 3050 | 0.0528 | - | | 0.0522 | 3100 | 0.0448 | - | | 0.0531 | 3150 | 0.0491 | - | | 0.0539 | 3200 | 0.0435 | - | | 0.0548 | 3250 | 0.0393 | - | | 0.0556 | 3300 | 0.0346 | - | | 0.0565 | 3350 | 0.0392 | - | | 0.0573 | 3400 | 0.0409 | - | | 0.0581 | 3450 | 0.0315 | - | | 0.0590 | 3500 | 0.0439 | - | | 0.0598 | 3550 | 0.0315 | - | | 0.0607 | 3600 | 0.0284 | - | | 0.0615 | 3650 | 0.0306 | - | | 0.0624 | 3700 | 0.0293 | - | | 0.0632 | 3750 | 0.031 | - | | 0.0640 | 3800 | 
0.03 | - | | 0.0649 | 3850 | 0.0282 | - | | 0.0657 | 3900 | 0.0327 | - | | 0.0666 | 3950 | 0.0245 | - | | 0.0674 | 4000 | 0.0264 | - | | 0.0683 | 4050 | 0.0211 | - | | 0.0691 | 4100 | 0.0236 | - | | 0.0699 | 4150 | 0.0218 | - | | 0.0708 | 4200 | 0.0189 | - | | 0.0716 | 4250 | 0.0201 | - | | 0.0725 | 4300 | 0.0205 | - | | 0.0733 | 4350 | 0.0192 | - | | 0.0742 | 4400 | 0.0177 | - | | 0.0750 | 4450 | 0.0155 | - | | 0.0758 | 4500 | 0.0188 | - | | 0.0767 | 4550 | 0.016 | - | | 0.0775 | 4600 | 0.0185 | - | | 0.0784 | 4650 | 0.0171 | - | | 0.0792 | 4700 | 0.0153 | - | | 0.0800 | 4750 | 0.0133 | - | | 0.0809 | 4800 | 0.0148 | - | | 0.0817 | 4850 | 0.0134 | - | | 0.0826 | 4900 | 0.0141 | - | | 0.0834 | 4950 | 0.0084 | - | | 0.0843 | 5000 | 0.0133 | - | | 0.0851 | 5050 | 0.0093 | - | | 0.0859 | 5100 | 0.0083 | - | | 0.0868 | 5150 | 0.0128 | - | | 0.0876 | 5200 | 0.0111 | - | | 0.0885 | 5250 | 0.0097 | - | | 0.0893 | 5300 | 0.0088 | - | | 0.0902 | 5350 | 0.0065 | - | | 0.0910 | 5400 | 0.0093 | - | | 0.0918 | 5450 | 0.0061 | - | | 0.0927 | 5500 | 0.0067 | - | | 0.0935 | 5550 | 0.0067 | - | | 0.0944 | 5600 | 0.0101 | - | | 0.0952 | 5650 | 0.0076 | - | | 0.0961 | 5700 | 0.0043 | - | | 0.0969 | 5750 | 0.0046 | - | | 0.0977 | 5800 | 0.0079 | - | | 0.0986 | 5850 | 0.0036 | - | | 0.0994 | 5900 | 0.004 | - | | 0.1003 | 5950 | 0.0043 | - | | 0.1011 | 6000 | 0.0043 | - | | 0.1020 | 6050 | 0.0059 | - | | 0.1028 | 6100 | 0.0063 | - | | 0.1036 | 6150 | 0.0058 | - | | 0.1045 | 6200 | 0.0067 | - | | 0.1053 | 6250 | 0.0043 | - | | 0.1062 | 6300 | 0.0038 | - | | 0.1070 | 6350 | 0.0032 | - | | 0.1079 | 6400 | 0.0024 | - | | 0.1087 | 6450 | 0.0037 | - | | 0.1095 | 6500 | 0.0034 | - | | 0.1104 | 6550 | 0.0032 | - | | 0.1112 | 6600 | 0.0037 | - | | 0.1121 | 6650 | 0.0034 | - | | 0.1129 | 6700 | 0.0022 | - | | 0.1138 | 6750 | 0.0025 | - | | 0.1146 | 6800 | 0.002 | - | | 0.1154 | 6850 | 0.0028 | - | | 0.1163 | 6900 | 0.0022 | - | | 0.1171 | 6950 | 0.0022 | - | | 0.1180 | 7000 | 0.0018 | - | | 0.1188 | 7050 | 0.0014 | - | | 0.1197 | 7100 | 0.0027 | - | | 0.1205 | 7150 | 0.0014 | - | | 0.1213 | 7200 | 0.0013 | - | | 0.1222 | 7250 | 0.001 | - | | 0.1230 | 7300 | 0.001 | - | | 0.1239 | 7350 | 0.0011 | - | | 0.1247 | 7400 | 0.0019 | - | | 0.1255 | 7450 | 0.0011 | - | | 0.1264 | 7500 | 0.0007 | - | | 0.1272 | 7550 | 0.0007 | - | | 0.1281 | 7600 | 0.0008 | - | | 0.1289 | 7650 | 0.0007 | - | | 0.1298 | 7700 | 0.0014 | - | | 0.1306 | 7750 | 0.0006 | - | | 0.1314 | 7800 | 0.0008 | - | | 0.1323 | 7850 | 0.0005 | - | | 0.1331 | 7900 | 0.0013 | - | | 0.1340 | 7950 | 0.0006 | - | | 0.1348 | 8000 | 0.001 | - | | 0.1357 | 8050 | 0.001 | - | | 0.1365 | 8100 | 0.0013 | - | | 0.1373 | 8150 | 0.0006 | - | | 0.1382 | 8200 | 0.0005 | - | | 0.1390 | 8250 | 0.0009 | - | | 0.1399 | 8300 | 0.0008 | - | | 0.1407 | 8350 | 0.0005 | - | | 0.1416 | 8400 | 0.0008 | - | | 0.1424 | 8450 | 0.0009 | - | | 0.1432 | 8500 | 0.0007 | - | | 0.1441 | 8550 | 0.0011 | - | | 0.0000 | 1 | 0.0003 | - | | 0.0008 | 50 | 0.0005 | - | | 0.0017 | 100 | 0.0005 | - | | 0.0025 | 150 | 0.0006 | - | | 0.0034 | 200 | 0.0008 | - | | 0.0042 | 250 | 0.0008 | - | | 0.0051 | 300 | 0.0005 | - | | 0.0059 | 350 | 0.0006 | - | | 0.0067 | 400 | 0.0004 | - | | 0.0076 | 450 | 0.0012 | - | | 0.0084 | 500 | 0.0007 | - | | 0.0093 | 550 | 0.0005 | - | | 0.0101 | 600 | 0.0008 | - | | 0.0110 | 650 | 0.001 | - | | 0.0118 | 700 | 0.0005 | - | | 0.0126 | 750 | 0.0004 | - | | 0.0135 | 800 | 0.0005 | - | | 0.0143 | 850 | 0.0005 | - | | 0.0152 | 900 | 0.0005 | - | | 0.0160 | 950 | 0.0004 | - | | 0.0169 
| 1000 | 0.0009 | - | | 0.0177 | 1050 | 0.0012 | - | | 0.0185 | 1100 | 0.0005 | - | | 0.0194 | 1150 | 0.0006 | - | | 0.0202 | 1200 | 0.0004 | - | | 0.0211 | 1250 | 0.0004 | - | | 0.0219 | 1300 | 0.0005 | - | | 0.0228 | 1350 | 0.001 | - | | 0.0236 | 1400 | 0.0005 | - | | 0.0244 | 1450 | 0.0005 | - | | 0.0253 | 1500 | 0.0005 | - | | 0.0261 | 1550 | 0.0005 | - | | 0.0270 | 1600 | 0.0006 | - | | 0.0278 | 1650 | 0.0013 | - | | 0.0286 | 1700 | 0.0006 | - | | 0.0295 | 1750 | 0.0006 | - | | 0.0303 | 1800 | 0.0004 | - | | 0.0312 | 1850 | 0.0003 | - | | 0.0320 | 1900 | 0.0006 | - | | 0.0329 | 1950 | 0.0003 | - | | 0.0337 | 2000 | 0.0003 | - | | 0.0345 | 2050 | 0.0009 | - | | 0.0354 | 2100 | 0.0003 | - | | 0.0362 | 2150 | 0.0005 | - | | 0.0371 | 2200 | 0.0006 | - | | 0.0379 | 2250 | 0.001 | - | | 0.0388 | 2300 | 0.0004 | - | | 0.0396 | 2350 | 0.0003 | - | | 0.0404 | 2400 | 0.0005 | - | | 0.0413 | 2450 | 0.0003 | - | | 0.0421 | 2500 | 0.0004 | - | | 0.0430 | 2550 | 0.0003 | - | | 0.0438 | 2600 | 0.0011 | - | | 0.0447 | 2650 | 0.0007 | - | | 0.0455 | 2700 | 0.0008 | - | | 0.0463 | 2750 | 0.0005 | - | | 0.0472 | 2800 | 0.0004 | - | | 0.0480 | 2850 | 0.0018 | - | | 0.0489 | 2900 | 0.001 | - | | 0.0497 | 2950 | 0.0005 | - | | 0.0506 | 3000 | 0.0005 | - | | 0.0514 | 3050 | 0.0017 | - | | 0.0522 | 3100 | 0.0003 | - | | 0.0531 | 3150 | 0.0008 | - | | 0.0539 | 3200 | 0.0008 | - | | 0.0548 | 3250 | 0.0007 | - | | 0.0556 | 3300 | 0.0006 | - | | 0.0565 | 3350 | 0.0004 | - | | 0.0573 | 3400 | 0.0005 | - | | 0.0581 | 3450 | 0.0008 | - | | 0.0590 | 3500 | 0.0009 | - | | 0.0598 | 3550 | 0.0005 | - | | 0.0607 | 3600 | 0.0014 | - | | 0.0615 | 3650 | 0.0004 | - | | 0.0624 | 3700 | 0.0003 | - | | 0.0632 | 3750 | 0.0005 | - | | 0.0640 | 3800 | 0.0006 | - | | 0.0649 | 3850 | 0.0005 | - | | 0.0657 | 3900 | 0.0013 | - | | 0.0666 | 3950 | 0.0003 | - | | 0.0674 | 4000 | 0.0004 | - | | 0.0683 | 4050 | 0.0002 | - | | 0.0691 | 4100 | 0.0006 | - | | 0.0699 | 4150 | 0.0007 | - | | 0.0708 | 4200 | 0.0007 | - | | 0.0716 | 4250 | 0.0006 | - | | 0.0725 | 4300 | 0.0003 | - | | 0.0733 | 4350 | 0.0009 | - | | 0.0742 | 4400 | 0.0009 | - | | 0.0750 | 4450 | 0.0003 | - | | 0.0758 | 4500 | 0.0003 | - | | 0.0767 | 4550 | 0.0006 | - | | 0.0775 | 4600 | 0.0002 | - | | 0.0784 | 4650 | 0.0006 | - | | 0.0792 | 4700 | 0.0006 | - | | 0.0800 | 4750 | 0.0002 | - | | 0.0809 | 4800 | 0.0003 | - | | 0.0817 | 4850 | 0.0008 | - | | 0.0826 | 4900 | 0.0004 | - | | 0.0834 | 4950 | 0.0002 | - | | 0.0843 | 5000 | 0.0009 | - | | 0.0851 | 5050 | 0.0008 | - | | 0.0859 | 5100 | 0.0004 | - | | 0.0868 | 5150 | 0.0007 | - | | 0.0876 | 5200 | 0.0005 | - | | 0.0885 | 5250 | 0.0002 | - | | 0.0893 | 5300 | 0.0004 | - | | 0.0902 | 5350 | 0.0002 | - | | 0.0910 | 5400 | 0.0005 | - | | 0.0918 | 5450 | 0.0005 | - | | 0.0927 | 5500 | 0.0002 | - | | 0.0935 | 5550 | 0.0005 | - | | 0.0944 | 5600 | 0.0017 | - | | 0.0952 | 5650 | 0.0003 | - | | 0.0961 | 5700 | 0.0003 | - | | 0.0969 | 5750 | 0.0006 | - | | 0.0977 | 5800 | 0.0002 | - | | 0.0986 | 5850 | 0.0002 | - | | 0.0994 | 5900 | 0.0002 | - | | 0.1003 | 5950 | 0.0014 | - | | 0.1011 | 6000 | 0.0002 | - | | 0.1020 | 6050 | 0.0012 | - | | 0.1028 | 6100 | 0.0011 | - | | 0.1036 | 6150 | 0.0004 | - | | 0.1045 | 6200 | 0.0006 | - | | 0.1053 | 6250 | 0.0007 | - | | 0.1062 | 6300 | 0.0003 | - | | 0.1070 | 6350 | 0.0005 | - | | 0.1079 | 6400 | 0.0004 | - | | 0.1087 | 6450 | 0.0002 | - | | 0.1095 | 6500 | 0.0003 | - | | 0.1104 | 6550 | 0.0001 | - | | 0.1112 | 6600 | 0.0009 | - | | 0.1121 | 6650 | 0.0002 | - | | 0.1129 | 6700 | 0.0001 | - | | 
0.1138 | 6750 | 0.0004 | - | | 0.1146 | 6800 | 0.0002 | - | | 0.1154 | 6850 | 0.0005 | - | | 0.1163 | 6900 | 0.0008 | - | | 0.1171 | 6950 | 0.0012 | - | | 0.1180 | 7000 | 0.0005 | - | | 0.1188 | 7050 | 0.0005 | - | | 0.1197 | 7100 | 0.0015 | - | | 0.1205 | 7150 | 0.0004 | - | | 0.1213 | 7200 | 0.0004 | - | | 0.1222 | 7250 | 0.0002 | - | | 0.1230 | 7300 | 0.0002 | - | | 0.1239 | 7350 | 0.0005 | - | | 0.1247 | 7400 | 0.0008 | - | | 0.1255 | 7450 | 0.0004 | - | | 0.1264 | 7500 | 0.0001 | - | | 0.1272 | 7550 | 0.0001 | - | | 0.1281 | 7600 | 0.0004 | - | | 0.1289 | 7650 | 0.0002 | - | | 0.1298 | 7700 | 0.0006 | - | | 0.1306 | 7750 | 0.0012 | - | | 0.1314 | 7800 | 0.0003 | - | | 0.1323 | 7850 | 0.0001 | - | | 0.1331 | 7900 | 0.0006 | - | | 0.1340 | 7950 | 0.0001 | - | | 0.1348 | 8000 | 0.0001 | - | | 0.1357 | 8050 | 0.0009 | - | | 0.1365 | 8100 | 0.0007 | - | | 0.1373 | 8150 | 0.0002 | - | | 0.1382 | 8200 | 0.0001 | - | | 0.1390 | 8250 | 0.0005 | - | | 0.1399 | 8300 | 0.0004 | - | | 0.1407 | 8350 | 0.0002 | - | | 0.1416 | 8400 | 0.0004 | - | | 0.1424 | 8450 | 0.0003 | - | | 0.1432 | 8500 | 0.0004 | - | | 0.1441 | 8550 | 0.0005 | - | | 0.1449 | 8600 | 0.0004 | - | | 0.1458 | 8650 | 0.0002 | - | | 0.1466 | 8700 | 0.0001 | - | | 0.1475 | 8750 | 0.0009 | - | | 0.1483 | 8800 | 0.0001 | - | | 0.1491 | 8850 | 0.0001 | - | | 0.1500 | 8900 | 0.0006 | - | | 0.1508 | 8950 | 0.0004 | - | | 0.1517 | 9000 | 0.0001 | - | | 0.1525 | 9050 | 0.0011 | - | | 0.1534 | 9100 | 0.0005 | - | | 0.1542 | 9150 | 0.0004 | - | | 0.1550 | 9200 | 0.0002 | - | | 0.1559 | 9250 | 0.001 | - | | 0.1567 | 9300 | 0.0005 | - | | 0.1576 | 9350 | 0.0004 | - | | 0.1584 | 9400 | 0.0001 | - | | 0.1593 | 9450 | 0.0001 | - | | 0.1601 | 9500 | 0.0011 | - | | 0.1609 | 9550 | 0.0006 | - | | 0.1618 | 9600 | 0.0001 | - | | 0.1626 | 9650 | 0.001 | - | | 0.1635 | 9700 | 0.0003 | - | | 0.1643 | 9750 | 0.0006 | - | | 0.1652 | 9800 | 0.0001 | - | | 0.1660 | 9850 | 0.0001 | - | | 0.1668 | 9900 | 0.0008 | - | | 0.1677 | 9950 | 0.0015 | - | | 0.1685 | 10000 | 0.0003 | - | | 0.1694 | 10050 | 0.0004 | - | | 0.1702 | 10100 | 0.0011 | - | | 0.1711 | 10150 | 0.0003 | - | | 0.1719 | 10200 | 0.0003 | - | | 0.1727 | 10250 | 0.0002 | - | | 0.1736 | 10300 | 0.0001 | - | | 0.1744 | 10350 | 0.0005 | - | | 0.1753 | 10400 | 0.0006 | - | | 0.1761 | 10450 | 0.0005 | - | | 0.1769 | 10500 | 0.0001 | - | | 0.1778 | 10550 | 0.0002 | - | | 0.1786 | 10600 | 0.0001 | - | | 0.1795 | 10650 | 0.0003 | - | | 0.1803 | 10700 | 0.0001 | - | | 0.1812 | 10750 | 0.0003 | - | | 0.1820 | 10800 | 0.0004 | - | | 0.1828 | 10850 | 0.0002 | - | | 0.1837 | 10900 | 0.0002 | - | | 0.1845 | 10950 | 0.0003 | - | | 0.1854 | 11000 | 0.0004 | - | | 0.1862 | 11050 | 0.0001 | - | | 0.1871 | 11100 | 0.0025 | - | | 0.1879 | 11150 | 0.0005 | - | | 0.1887 | 11200 | 0.0093 | - | | 0.1896 | 11250 | 0.0106 | - | | 0.1904 | 11300 | 0.0135 | - | | 0.1913 | 11350 | 0.0035 | - | | 0.1921 | 11400 | 0.006 | - | | 0.1930 | 11450 | 0.0019 | - | | 0.1938 | 11500 | 0.0019 | - | | 0.1946 | 11550 | 0.003 | - | | 0.1955 | 11600 | 0.0034 | - | | 0.1963 | 11650 | 0.0008 | - | | 0.1972 | 11700 | 0.0015 | - | | 0.1980 | 11750 | 0.0006 | - | | 0.1989 | 11800 | 0.0015 | - | | 0.1997 | 11850 | 0.0004 | - | | 0.2005 | 11900 | 0.0006 | - | | 0.2014 | 11950 | 0.0002 | - | | 0.2022 | 12000 | 0.0005 | - | | 0.2031 | 12050 | 0.0004 | - | | 0.2039 | 12100 | 0.0005 | - | | 0.2048 | 12150 | 0.0004 | - | | 0.2056 | 12200 | 0.0002 | - | | 0.2064 | 12250 | 0.0001 | - | | 0.2073 | 12300 | 0.0005 | - | | 0.2081 | 12350 | 0.0003 | - | | 0.2090 
| 12400 | 0.0002 | - | | 0.2098 | 12450 | 0.0002 | - | | 0.2107 | 12500 | 0.0004 | - | | 0.2115 | 12550 | 0.0003 | - | | 0.2123 | 12600 | 0.0005 | - | | 0.2132 | 12650 | 0.0002 | - | | 0.2140 | 12700 | 0.0001 | - | | 0.2149 | 12750 | 0.0001 | - | | 0.2157 | 12800 | 0.0001 | - | | 0.2166 | 12850 | 0.0001 | - | | 0.2174 | 12900 | 0.0004 | - | | 0.2182 | 12950 | 0.0004 | - | | 0.2191 | 13000 | 0.0003 | - | | 0.2199 | 13050 | 0.0003 | - | | 0.2208 | 13100 | 0.0001 | - | | 0.2216 | 13150 | 0.0004 | - | | 0.2225 | 13200 | 0.0008 | - | | 0.2233 | 13250 | 0.0005 | - | | 0.2241 | 13300 | 0.0003 | - | | 0.2250 | 13350 | 0.0002 | - | | 0.2258 | 13400 | 0.0006 | - | | 0.2267 | 13450 | 0.0018 | - | | 0.2275 | 13500 | 0.0001 | - | | 0.2283 | 13550 | 0.0002 | - | | 0.2292 | 13600 | 0.0003 | - | | 0.2300 | 13650 | 0.0001 | - | | 0.2309 | 13700 | 0.0004 | - | | 0.2317 | 13750 | 0.0001 | - | | 0.2326 | 13800 | 0.0004 | - | | 0.2334 | 13850 | 0.0003 | - | | 0.2342 | 13900 | 0.0002 | - | | 0.2351 | 13950 | 0.0002 | - | | 0.2359 | 14000 | 0.0001 | - | | 0.2368 | 14050 | 0.0006 | - | | 0.2376 | 14100 | 0.0005 | - | | 0.2385 | 14150 | 0.0006 | - | | 0.2393 | 14200 | 0.0001 | - | | 0.2401 | 14250 | 0.0002 | - | | 0.2410 | 14300 | 0.0003 | - | | 0.2418 | 14350 | 0.0003 | - | | 0.2427 | 14400 | 0.0002 | - | | 0.2435 | 14450 | 0.0001 | - | | 0.2444 | 14500 | 0.0007 | - | | 0.2452 | 14550 | 0.0003 | - | | 0.2460 | 14600 | 0.0002 | - | | 0.2469 | 14650 | 0.0003 | - | | 0.2477 | 14700 | 0.0001 | - | | 0.2486 | 14750 | 0.0002 | - | | 0.2494 | 14800 | 0.0001 | - | | 0.2503 | 14850 | 0.0001 | - | | 0.2511 | 14900 | 0.0005 | - | | 0.2519 | 14950 | 0.0001 | - | | 0.2528 | 15000 | 0.0013 | - | | 0.2536 | 15050 | 0.0012 | - | | 0.2545 | 15100 | 0.0008 | - | | 0.2553 | 15150 | 0.0003 | - | | 0.2562 | 15200 | 0.0002 | - | | 0.2570 | 15250 | 0.0001 | - | | 0.2578 | 15300 | 0.0001 | - | | 0.2587 | 15350 | 0.0001 | - | | 0.2595 | 15400 | 0.0005 | - | | 0.2604 | 15450 | 0.0002 | - | | 0.2612 | 15500 | 0.0002 | - | | 0.2621 | 15550 | 0.0001 | - | | 0.2629 | 15600 | 0.0006 | - | | 0.2637 | 15650 | 0.0007 | - | | 0.2646 | 15700 | 0.001 | - | | 0.2654 | 15750 | 0.0025 | - | | 0.2663 | 15800 | 0.0077 | - | | 0.2671 | 15850 | 0.0095 | - | | 0.2680 | 15900 | 0.0018 | - | | 0.2688 | 15950 | 0.0014 | - | | 0.2696 | 16000 | 0.0034 | - | | 0.2705 | 16050 | 0.0015 | - | | 0.2713 | 16100 | 0.0006 | - | | 0.2722 | 16150 | 0.0028 | - | | 0.2730 | 16200 | 0.0008 | - | | 0.2739 | 16250 | 0.0006 | - | | 0.2747 | 16300 | 0.0005 | - | | 0.2755 | 16350 | 0.0002 | - | | 0.2764 | 16400 | 0.0012 | - | | 0.2772 | 16450 | 0.001 | - | | 0.2781 | 16500 | 0.0008 | - | | 0.2789 | 16550 | 0.0017 | - | | 0.2797 | 16600 | 0.0026 | - | | 0.2806 | 16650 | 0.0005 | - | | 0.2814 | 16700 | 0.0001 | - | | 0.2823 | 16750 | 0.0003 | - | | 0.2831 | 16800 | 0.0005 | - | | 0.2840 | 16850 | 0.0023 | - | | 0.2848 | 16900 | 0.0031 | - | | 0.2856 | 16950 | 0.0042 | - | | 0.2865 | 17000 | 0.0021 | - | | 0.2873 | 17050 | 0.0017 | - | | 0.2882 | 17100 | 0.0008 | - | | 0.2890 | 17150 | 0.0002 | - | | 0.2899 | 17200 | 0.0004 | - | | 0.2907 | 17250 | 0.0002 | - | | 0.2915 | 17300 | 0.0015 | - | | 0.2924 | 17350 | 0.0004 | - | | 0.2932 | 17400 | 0.0001 | - | | 0.2941 | 17450 | 0.0001 | - | | 0.2949 | 17500 | 0.001 | - | | 0.2958 | 17550 | 0.001 | - | | 0.2966 | 17600 | 0.0004 | - | | 0.2974 | 17650 | 0.0003 | - | | 0.2983 | 17700 | 0.0001 | - | | 0.2991 | 17750 | 0.0002 | - | | 0.3000 | 17800 | 0.0012 | - | | 0.3008 | 17850 | 0.0006 | - | | 0.3017 | 17900 | 0.0003 | - | | 0.3025 | 
17950 | 0.0003 | - | | 0.3033 | 18000 | 0.0002 | - | | 0.3042 | 18050 | 0.0003 | - | | 0.3050 | 18100 | 0.0002 | - | | 0.3059 | 18150 | 0.0002 | - | | 0.3067 | 18200 | 0.0009 | - | | 0.3076 | 18250 | 0.0003 | - | | 0.3084 | 18300 | 0.0004 | - | | 0.3092 | 18350 | 0.0002 | - | | 0.3101 | 18400 | 0.0001 | - | | 0.3109 | 18450 | 0.0009 | - | | 0.3118 | 18500 | 0.0001 | - | | 0.3126 | 18550 | 0.0009 | - | | 0.3135 | 18600 | 0.0003 | - | | 0.3143 | 18650 | 0.0002 | - | | 0.3151 | 18700 | 0.0001 | - | | 0.3160 | 18750 | 0.0004 | - | | 0.3168 | 18800 | 0.0009 | - | | 0.3177 | 18850 | 0.0002 | - | | 0.3185 | 18900 | 0.0003 | - | | 0.3194 | 18950 | 0.0001 | - | | 0.3202 | 19000 | 0.0004 | - | | 0.3210 | 19050 | 0.001 | - | | 0.3219 | 19100 | 0.0003 | - | | 0.3227 | 19150 | 0.0001 | - | | 0.3236 | 19200 | 0.0001 | - | | 0.3244 | 19250 | 0.0001 | - | | 0.3252 | 19300 | 0.0003 | - | | 0.3261 | 19350 | 0.0001 | - | | 0.3269 | 19400 | 0.0001 | - | | 0.3278 | 19450 | 0.0005 | - | | 0.3286 | 19500 | 0.0001 | - | | 0.3295 | 19550 | 0.0001 | - | | 0.3303 | 19600 | 0.0001 | - | | 0.3311 | 19650 | 0.0004 | - | | 0.3320 | 19700 | 0.0003 | - | | 0.3328 | 19750 | 0.0002 | - | | 0.3337 | 19800 | 0.0004 | - | | 0.3345 | 19850 | 0.0002 | - | | 0.3354 | 19900 | 0.0002 | - | | 0.3362 | 19950 | 0.0003 | - | | 0.3370 | 20000 | 0.0004 | - | | 0.3379 | 20050 | 0.0004 | - | | 0.3387 | 20100 | 0.0003 | - | | 0.3396 | 20150 | 0.0001 | - | | 0.3404 | 20200 | 0.0002 | - | | 0.3413 | 20250 | 0.0002 | - | | 0.3421 | 20300 | 0.0002 | - | | 0.3429 | 20350 | 0.0 | - | | 0.3438 | 20400 | 0.0003 | - | | 0.3446 | 20450 | 0.0006 | - | | 0.3455 | 20500 | 0.0003 | - | | 0.3463 | 20550 | 0.0004 | - | | 0.3472 | 20600 | 0.0002 | - | | 0.3480 | 20650 | 0.0001 | - | | 0.3488 | 20700 | 0.0002 | - | | 0.3497 | 20750 | 0.0004 | - | | 0.3505 | 20800 | 0.0003 | - | | 0.3514 | 20850 | 0.0001 | - | | 0.3522 | 20900 | 0.0003 | - | | 0.3531 | 20950 | 0.0004 | - | | 0.3539 | 21000 | 0.0004 | - | | 0.3547 | 21050 | 0.0003 | - | | 0.3556 | 21100 | 0.0002 | - | | 0.3564 | 21150 | 0.0001 | - | | 0.3573 | 21200 | 0.0001 | - | | 0.3581 | 21250 | 0.0004 | - | | 0.3590 | 21300 | 0.0002 | - | | 0.3598 | 21350 | 0.0001 | - | | 0.3606 | 21400 | 0.0007 | - | | 0.3615 | 21450 | 0.0005 | - | | 0.3623 | 21500 | 0.0003 | - | | 0.3632 | 21550 | 0.0002 | - | | 0.3640 | 21600 | 0.0001 | - | | 0.3649 | 21650 | 0.0007 | - | | 0.3657 | 21700 | 0.0001 | - | | 0.3665 | 21750 | 0.0004 | - | | 0.3674 | 21800 | 0.0001 | - | | 0.3682 | 21850 | 0.0002 | - | | 0.3691 | 21900 | 0.0002 | - | | 0.3699 | 21950 | 0.0002 | - | | 0.3708 | 22000 | 0.001 | - | | 0.3716 | 22050 | 0.0002 | - | | 0.3724 | 22100 | 0.0003 | - | | 0.3733 | 22150 | 0.0001 | - | | 0.3741 | 22200 | 0.0002 | - | | 0.3750 | 22250 | 0.0004 | - | | 0.3758 | 22300 | 0.0003 | - | | 0.3766 | 22350 | 0.0002 | - | | 0.3775 | 22400 | 0.0003 | - | | 0.3783 | 22450 | 0.0011 | - | | 0.3792 | 22500 | 0.0003 | - | | 0.3800 | 22550 | 0.0002 | - | | 0.3809 | 22600 | 0.0 | - | | 0.3817 | 22650 | 0.0001 | - | | 0.3825 | 22700 | 0.0003 | - | | 0.3834 | 22750 | 0.0003 | - | | 0.3842 | 22800 | 0.0003 | - | | 0.3851 | 22850 | 0.0003 | - | | 0.3859 | 22900 | 0.0002 | - | | 0.3868 | 22950 | 0.0001 | - | | 0.3876 | 23000 | 0.0012 | - | | 0.3884 | 23050 | 0.0003 | - | | 0.3893 | 23100 | 0.0005 | - | | 0.3901 | 23150 | 0.0017 | - | | 0.3910 | 23200 | 0.0005 | - | | 0.3918 | 23250 | 0.0002 | - | | 0.3927 | 23300 | 0.0001 | - | | 0.3935 | 23350 | 0.0127 | - | | 0.3943 | 23400 | 0.0261 | - | | 0.3952 | 23450 | 0.0199 | - | | 0.3960 | 23500 | 
0.0092 | - | | 0.3969 | 23550 | 0.005 | - | | 0.3977 | 23600 | 0.0046 | - | | 0.3986 | 23650 | 0.0053 | - | | 0.3994 | 23700 | 0.0015 | - | | 0.4002 | 23750 | 0.0027 | - | | 0.4011 | 23800 | 0.0029 | - | | 0.4019 | 23850 | 0.0009 | - | | 0.4028 | 23900 | 0.0005 | - | | 0.4036 | 23950 | 0.0014 | - | | 0.4045 | 24000 | 0.0001 | - | | 0.4053 | 24050 | 0.0003 | - | | 0.4061 | 24100 | 0.0007 | - | | 0.4070 | 24150 | 0.0007 | - | | 0.4078 | 24200 | 0.0002 | - | | 0.4087 | 24250 | 0.0007 | - | | 0.4095 | 24300 | 0.0001 | - | | 0.4104 | 24350 | 0.0004 | - | | 0.4112 | 24400 | 0.001 | - | | 0.4120 | 24450 | 0.0002 | - | | 0.4129 | 24500 | 0.0002 | - | | 0.4137 | 24550 | 0.0009 | - | | 0.4146 | 24600 | 0.0012 | - | | 0.4154 | 24650 | 0.0001 | - | | 0.4163 | 24700 | 0.0002 | - | | 0.4171 | 24750 | 0.0001 | - | | 0.4179 | 24800 | 0.0001 | - | | 0.4188 | 24850 | 0.0003 | - | | 0.4196 | 24900 | 0.0002 | - | | 0.4205 | 24950 | 0.0001 | - | | 0.4213 | 25000 | 0.0009 | - | | 0.4222 | 25050 | 0.0006 | - | | 0.4230 | 25100 | 0.0005 | - | | 0.4238 | 25150 | 0.0003 | - | | 0.4247 | 25200 | 0.0001 | - | | 0.4255 | 25250 | 0.0001 | - | | 0.4264 | 25300 | 0.0004 | - | | 0.4272 | 25350 | 0.0009 | - | | 0.4280 | 25400 | 0.0007 | - | | 0.4289 | 25450 | 0.0 | - | | 0.4297 | 25500 | 0.0003 | - | | 0.4306 | 25550 | 0.0001 | - | | 0.4314 | 25600 | 0.0001 | - | | 0.4323 | 25650 | 0.0 | - | | 0.4331 | 25700 | 0.0003 | - | | 0.4339 | 25750 | 0.0003 | - | | 0.4348 | 25800 | 0.0002 | - | | 0.4356 | 25850 | 0.0003 | - | | 0.4365 | 25900 | 0.0001 | - | | 0.4373 | 25950 | 0.0004 | - | | 0.4382 | 26000 | 0.0009 | - | | 0.4390 | 26050 | 0.0 | - | | 0.4398 | 26100 | 0.0001 | - | | 0.4407 | 26150 | 0.0002 | - | | 0.4415 | 26200 | 0.0003 | - | | 0.4424 | 26250 | 0.0001 | - | | 0.4432 | 26300 | 0.0 | - | | 0.4441 | 26350 | 0.0002 | - | | 0.4449 | 26400 | 0.0 | - | | 0.4457 | 26450 | 0.0003 | - | | 0.4466 | 26500 | 0.0004 | - | | 0.4474 | 26550 | 0.0021 | - | | 0.4483 | 26600 | 0.0011 | - | | 0.4491 | 26650 | 0.002 | - | | 0.4500 | 26700 | 0.002 | - | | 0.4508 | 26750 | 0.0009 | - | | 0.4516 | 26800 | 0.0002 | - | | 0.4525 | 26850 | 0.0001 | - | | 0.4533 | 26900 | 0.0002 | - | | 0.4542 | 26950 | 0.0008 | - | | 0.4550 | 27000 | 0.0001 | - | | 0.4559 | 27050 | 0.0005 | - | | 0.4567 | 27100 | 0.0016 | - | | 0.4575 | 27150 | 0.0025 | - | | 0.4584 | 27200 | 0.0032 | - | | 0.4592 | 27250 | 0.0007 | - | | 0.4601 | 27300 | 0.0013 | - | | 0.4609 | 27350 | 0.0003 | - | | 0.4618 | 27400 | 0.0001 | - | | 0.4626 | 27450 | 0.0004 | - | | 0.4634 | 27500 | 0.0003 | - | | 0.4643 | 27550 | 0.0009 | - | | 0.4651 | 27600 | 0.0004 | - | | 0.4660 | 27650 | 0.0 | - | | 0.4668 | 27700 | 0.0 | - | | 0.4677 | 27750 | 0.0001 | - | | 0.4685 | 27800 | 0.0 | - | | 0.4693 | 27850 | 0.0011 | - | | 0.4702 | 27900 | 0.0001 | - | | 0.4710 | 27950 | 0.0022 | - | | 0.4719 | 28000 | 0.0003 | - | | 0.4727 | 28050 | 0.0002 | - | | 0.4736 | 28100 | 0.0003 | - | | 0.4744 | 28150 | 0.0 | - | | 0.4752 | 28200 | 0.0001 | - | | 0.4761 | 28250 | 0.0001 | - | | 0.4769 | 28300 | 0.0002 | - | | 0.4778 | 28350 | 0.0004 | - | | 0.4786 | 28400 | 0.0 | - | | 0.4794 | 28450 | 0.0004 | - | | 0.4803 | 28500 | 0.0002 | - | | 0.4811 | 28550 | 0.0001 | - | | 0.4820 | 28600 | 0.0003 | - | | 0.4828 | 28650 | 0.0004 | - | | 0.4837 | 28700 | 0.0001 | - | | 0.4845 | 28750 | 0.0005 | - | | 0.4853 | 28800 | 0.0 | - | | 0.4862 | 28850 | 0.0007 | - | | 0.4870 | 28900 | 0.0002 | - | | 0.4879 | 28950 | 0.0002 | - | | 0.4887 | 29000 | 0.0003 | - | | 0.4896 | 29050 | 0.0011 | - | | 0.4904 | 29100 | 0.0004 
| - | | 0.4912 | 29150 | 0.0003 | - | | 0.4921 | 29200 | 0.0001 | - | | 0.4929 | 29250 | 0.0003 | - | | 0.4938 | 29300 | 0.0 | - | | 0.4946 | 29350 | 0.0005 | - | | 0.4955 | 29400 | 0.0001 | - | | 0.4963 | 29450 | 0.0006 | - | | 0.4971 | 29500 | 0.0002 | - | | 0.4980 | 29550 | 0.0004 | - | | 0.4988 | 29600 | 0.0002 | - | | 0.4997 | 29650 | 0.0001 | - | | 0.5005 | 29700 | 0.0003 | - | | 0.5014 | 29750 | 0.0009 | - | | 0.5022 | 29800 | 0.0004 | - | | 0.5030 | 29850 | 0.0008 | - | | 0.5039 | 29900 | 0.0002 | - | | 0.5047 | 29950 | 0.0003 | - | | 0.5056 | 30000 | 0.0005 | - | | 0.5064 | 30050 | 0.0004 | - | | 0.5073 | 30100 | 0.0008 | - | | 0.5081 | 30150 | 0.0002 | - | | 0.5089 | 30200 | 0.0001 | - | | 0.5098 | 30250 | 0.0007 | - | | 0.5106 | 30300 | 0.0001 | - | | 0.5115 | 30350 | 0.0001 | - | | 0.5123 | 30400 | 0.0003 | - | | 0.5132 | 30450 | 0.0002 | - | | 0.5140 | 30500 | 0.0003 | - | | 0.5148 | 30550 | 0.0006 | - | | 0.5157 | 30600 | 0.001 | - | | 0.5165 | 30650 | 0.0004 | - | | 0.5174 | 30700 | 0.0001 | - | | 0.5182 | 30750 | 0.0005 | - | | 0.5191 | 30800 | 0.0002 | - | | 0.5199 | 30850 | 0.0055 | - | | 0.5207 | 30900 | 0.0369 | - | | 0.5216 | 30950 | 0.0274 | - | | 0.5224 | 31000 | 0.0226 | - | | 0.5233 | 31050 | 0.0174 | - | | 0.5241 | 31100 | 0.0235 | - | | 0.5249 | 31150 | 0.0196 | - | | 0.5258 | 31200 | 0.0222 | - | | 0.5266 | 31250 | 0.0121 | - | | 0.5275 | 31300 | 0.0134 | - | | 0.5283 | 31350 | 0.0114 | - | | 0.5292 | 31400 | 0.0077 | - | | 0.5300 | 31450 | 0.0114 | - | | 0.5308 | 31500 | 0.0033 | - | | 0.5317 | 31550 | 0.0068 | - | | 0.5325 | 31600 | 0.0057 | - | | 0.5334 | 31650 | 0.0019 | - | | 0.5342 | 31700 | 0.0078 | - | | 0.5351 | 31750 | 0.0025 | - | | 0.5359 | 31800 | 0.0032 | - | | 0.5367 | 31850 | 0.0026 | - | | 0.5376 | 31900 | 0.0014 | - | | 0.5384 | 31950 | 0.0013 | - | | 0.5393 | 32000 | 0.0013 | - | | 0.5401 | 32050 | 0.0025 | - | | 0.5410 | 32100 | 0.0025 | - | | 0.5418 | 32150 | 0.0014 | - | | 0.5426 | 32200 | 0.0004 | - | | 0.5435 | 32250 | 0.005 | - | | 0.5443 | 32300 | 0.0003 | - | | 0.5452 | 32350 | 0.0015 | - | | 0.5460 | 32400 | 0.0044 | - | | 0.5469 | 32450 | 0.0016 | - | | 0.5477 | 32500 | 0.0012 | - | | 0.5485 | 32550 | 0.0003 | - | | 0.5494 | 32600 | 0.0005 | - | | 0.5502 | 32650 | 0.0021 | - | | 0.5511 | 32700 | 0.0001 | - | | 0.5519 | 32750 | 0.0002 | - | | 0.5528 | 32800 | 0.0001 | - | | 0.5536 | 32850 | 0.0005 | - | | 0.5544 | 32900 | 0.0003 | - | | 0.5553 | 32950 | 0.0001 | - | | 0.5561 | 33000 | 0.0001 | - | | 0.5570 | 33050 | 0.0003 | - | | 0.5578 | 33100 | 0.0001 | - | | 0.5587 | 33150 | 0.0001 | - | | 0.5595 | 33200 | 0.0 | - | | 0.5603 | 33250 | 0.0005 | - | | 0.5612 | 33300 | 0.0003 | - | | 0.5620 | 33350 | 0.0001 | - | | 0.5629 | 33400 | 0.0004 | - | | 0.5637 | 33450 | 0.0004 | - | | 0.5646 | 33500 | 0.0003 | - | | 0.5654 | 33550 | 0.0008 | - | | 0.5662 | 33600 | 0.0002 | - | | 0.5671 | 33650 | 0.0006 | - | | 0.5679 | 33700 | 0.0 | - | | 0.5688 | 33750 | 0.0005 | - | | 0.5696 | 33800 | 0.0003 | - | | 0.5705 | 33850 | 0.0002 | - | | 0.5713 | 33900 | 0.0003 | - | | 0.5721 | 33950 | 0.001 | - | | 0.5730 | 34000 | 0.0004 | - | | 0.5738 | 34050 | 0.0009 | - | | 0.5747 | 34100 | 0.0003 | - | | 0.5755 | 34150 | 0.0003 | - | | 0.5763 | 34200 | 0.0 | - | | 0.5772 | 34250 | 0.0011 | - | | 0.5780 | 34300 | 0.0003 | - | | 0.5789 | 34350 | 0.0002 | - | | 0.5797 | 34400 | 0.0 | - | | 0.5806 | 34450 | 0.0002 | - | | 0.5814 | 34500 | 0.0001 | - | | 0.5822 | 34550 | 0.0023 | - | | 0.5831 | 34600 | 0.0018 | - | | 0.5839 | 34650 | 0.0011 | - | | 0.5848 | 
34700 | 0.0015 | - | | 0.5856 | 34750 | 0.0046 | - | | 0.5865 | 34800 | 0.0024 | - | | 0.5873 | 34850 | 0.0021 | - | | 0.5881 | 34900 | 0.0013 | - | | 0.5890 | 34950 | 0.0012 | - | | 0.5898 | 35000 | 0.0026 | - | | 0.5907 | 35050 | 0.0026 | - | | 0.5915 | 35100 | 0.0008 | - | | 0.5924 | 35150 | 0.0022 | - | | 0.5932 | 35200 | 0.003 | - | | 0.5940 | 35250 | 0.0001 | - | | 0.5949 | 35300 | 0.0 | - | | 0.5957 | 35350 | 0.0015 | - | | 0.5966 | 35400 | 0.0 | - | | 0.5974 | 35450 | 0.0001 | - | | 0.5983 | 35500 | 0.001 | - | | 0.5991 | 35550 | 0.0001 | - | | 0.5999 | 35600 | 0.001 | - | | 0.6008 | 35650 | 0.0011 | - | | 0.6016 | 35700 | 0.0007 | - | | 0.6025 | 35750 | 0.0015 | - | | 0.6033 | 35800 | 0.0006 | - | | 0.6042 | 35850 | 0.0011 | - | | 0.6050 | 35900 | 0.0006 | - | | 0.6058 | 35950 | 0.0012 | - | | 0.6067 | 36000 | 0.0008 | - | | 0.6075 | 36050 | 0.0004 | - | | 0.6084 | 36100 | 0.0001 | - | | 0.6092 | 36150 | 0.0013 | - | | 0.6101 | 36200 | 0.0018 | - | | 0.6109 | 36250 | 0.0003 | - | | 0.6117 | 36300 | 0.0004 | - | | 0.6126 | 36350 | 0.0004 | - | | 0.6134 | 36400 | 0.0011 | - | | 0.6143 | 36450 | 0.0003 | - | | 0.6151 | 36500 | 0.0001 | - | | 0.6160 | 36550 | 0.0 | - | | 0.6168 | 36600 | 0.0003 | - | | 0.6176 | 36650 | 0.0002 | - | | 0.6185 | 36700 | 0.0001 | - | | 0.6193 | 36750 | 0.0014 | - | | 0.6202 | 36800 | 0.0014 | - | | 0.6210 | 36850 | 0.0016 | - | | 0.6219 | 36900 | 0.002 | - | | 0.6227 | 36950 | 0.0005 | - | | 0.6235 | 37000 | 0.0013 | - | | 0.6244 | 37050 | 0.0005 | - | | 0.6252 | 37100 | 0.0016 | - | | 0.6261 | 37150 | 0.0025 | - | | 0.6269 | 37200 | 0.0006 | - | | 0.6277 | 37250 | 0.0002 | - | | 0.6286 | 37300 | 0.0013 | - | | 0.6294 | 37350 | 0.0005 | - | | 0.6303 | 37400 | 0.0012 | - | | 0.6311 | 37450 | 0.0004 | - | | 0.6320 | 37500 | 0.0007 | - | | 0.6328 | 37550 | 0.0005 | - | | 0.6336 | 37600 | 0.0053 | - | | 0.6345 | 37650 | 0.0039 | - | | 0.6353 | 37700 | 0.0076 | - | | 0.6362 | 37750 | 0.0068 | - | | 0.6370 | 37800 | 0.0059 | - | | 0.6379 | 37850 | 0.0031 | - | | 0.6387 | 37900 | 0.0022 | - | | 0.6395 | 37950 | 0.0007 | - | | 0.6404 | 38000 | 0.0002 | - | | 0.6412 | 38050 | 0.0027 | - | | 0.6421 | 38100 | 0.0003 | - | | 0.6429 | 38150 | 0.0027 | - | | 0.6438 | 38200 | 0.0014 | - | | 0.6446 | 38250 | 0.0002 | - | | 0.6454 | 38300 | 0.0 | - | | 0.6463 | 38350 | 0.0009 | - | | 0.6471 | 38400 | 0.001 | - | | 0.6480 | 38450 | 0.0001 | - | | 0.6488 | 38500 | 0.0001 | - | | 0.6497 | 38550 | 0.0002 | - | | 0.6505 | 38600 | 0.0 | - | | 0.6513 | 38650 | 0.0 | - | | 0.6522 | 38700 | 0.0001 | - | | 0.6530 | 38750 | 0.0017 | - | | 0.6539 | 38800 | 0.001 | - | | 0.6547 | 38850 | 0.0002 | - | | 0.6556 | 38900 | 0.0002 | - | | 0.6564 | 38950 | 0.0 | - | | 0.6572 | 39000 | 0.0 | - | | 0.6581 | 39050 | 0.0002 | - | | 0.6589 | 39100 | 0.0 | - | | 0.6598 | 39150 | 0.0 | - | | 0.6606 | 39200 | 0.0002 | - | | 0.6615 | 39250 | 0.0 | - | | 0.6623 | 39300 | 0.0001 | - | | 0.6631 | 39350 | 0.0005 | - | | 0.6640 | 39400 | 0.0001 | - | | 0.6648 | 39450 | 0.0 | - | | 0.6657 | 39500 | 0.0002 | - | | 0.6665 | 39550 | 0.0001 | - | | 0.6674 | 39600 | 0.0004 | - | | 0.6682 | 39650 | 0.0007 | - | | 0.6690 | 39700 | 0.0001 | - | | 0.6699 | 39750 | 0.0001 | - | | 0.6707 | 39800 | 0.0001 | - | | 0.6716 | 39850 | 0.0003 | - | | 0.6724 | 39900 | 0.0003 | - | | 0.6733 | 39950 | 0.0001 | - | | 0.6741 | 40000 | 0.0001 | - | | 0.6749 | 40050 | 0.0003 | - | | 0.6758 | 40100 | 0.0003 | - | | 0.6766 | 40150 | 0.0002 | - | | 0.6775 | 40200 | 0.0 | - | | 0.6783 | 40250 | 0.0 | - | | 0.6791 | 40300 | 0.0002 | - 
| | 0.6800 | 40350 | 0.0001 | - | | 0.6808 | 40400 | 0.0002 | - | | 0.6817 | 40450 | 0.0003 | - | | 0.6825 | 40500 | 0.0002 | - | | 0.6834 | 40550 | 0.0001 | - | | 0.6842 | 40600 | 0.0002 | - | | 0.6850 | 40650 | 0.0001 | - | | 0.6859 | 40700 | 0.0003 | - | | 0.6867 | 40750 | 0.0009 | - | | 0.6876 | 40800 | 0.0 | - | | 0.6884 | 40850 | 0.0006 | - | | 0.6893 | 40900 | 0.0003 | - | | 0.6901 | 40950 | 0.0003 | - | | 0.6909 | 41000 | 0.0002 | - | | 0.6918 | 41050 | 0.0001 | - | | 0.6926 | 41100 | 0.0005 | - | | 0.6935 | 41150 | 0.0003 | - | | 0.6943 | 41200 | 0.0003 | - | | 0.6952 | 41250 | 0.0003 | - | | 0.6960 | 41300 | 0.0001 | - | | 0.6968 | 41350 | 0.0003 | - | | 0.6977 | 41400 | 0.0001 | - | | 0.6985 | 41450 | 0.0001 | - | | 0.6994 | 41500 | 0.0001 | - | | 0.7002 | 41550 | 0.0 | - | | 0.7011 | 41600 | 0.0001 | - | | 0.7019 | 41650 | 0.0008 | - | | 0.7027 | 41700 | 0.0002 | - | | 0.7036 | 41750 | 0.0 | - | | 0.7044 | 41800 | 0.0001 | - | | 0.7053 | 41850 | 0.0002 | - | | 0.7061 | 41900 | 0.0003 | - | | 0.7070 | 41950 | 0.0001 | - | | 0.7078 | 42000 | 0.0002 | - | | 0.7086 | 42050 | 0.0005 | - | | 0.7095 | 42100 | 0.0003 | - | | 0.7103 | 42150 | 0.0001 | - | | 0.7112 | 42200 | 0.0001 | - | | 0.7120 | 42250 | 0.0001 | - | | 0.7129 | 42300 | 0.0 | - | | 0.7137 | 42350 | 0.0004 | - | | 0.7145 | 42400 | 0.0001 | - | | 0.7154 | 42450 | 0.0002 | - | | 0.7162 | 42500 | 0.0004 | - | | 0.7171 | 42550 | 0.0003 | - | | 0.7179 | 42600 | 0.0002 | - | | 0.7188 | 42650 | 0.0002 | - | | 0.7196 | 42700 | 0.0 | - | | 0.7204 | 42750 | 0.0002 | - | | 0.7213 | 42800 | 0.0003 | - | | 0.7221 | 42850 | 0.0 | - | | 0.7230 | 42900 | 0.0009 | - | | 0.7238 | 42950 | 0.0005 | - | | 0.7246 | 43000 | 0.0001 | - | | 0.7255 | 43050 | 0.0001 | - | | 0.7263 | 43100 | 0.0007 | - | | 0.7272 | 43150 | 0.0001 | - | | 0.7280 | 43200 | 0.0 | - | | 0.7289 | 43250 | 0.0 | - | | 0.7297 | 43300 | 0.0004 | - | | 0.7305 | 43350 | 0.0002 | - | | 0.7314 | 43400 | 0.0002 | - | | 0.7322 | 43450 | 0.0001 | - | | 0.7331 | 43500 | 0.0 | - | | 0.7339 | 43550 | 0.0002 | - | | 0.7348 | 43600 | 0.0004 | - | | 0.7356 | 43650 | 0.0002 | - | | 0.7364 | 43700 | 0.0001 | - | | 0.7373 | 43750 | 0.0 | - | | 0.7381 | 43800 | 0.0003 | - | | 0.7390 | 43850 | 0.0002 | - | | 0.7398 | 43900 | 0.0003 | - | | 0.7407 | 43950 | 0.0002 | - | | 0.7415 | 44000 | 0.0 | - | | 0.7423 | 44050 | 0.0007 | - | | 0.7432 | 44100 | 0.0003 | - | | 0.7440 | 44150 | 0.0002 | - | | 0.7449 | 44200 | 0.0001 | - | | 0.7457 | 44250 | 0.0003 | - | | 0.7466 | 44300 | 0.0001 | - | | 0.7474 | 44350 | 0.0002 | - | | 0.7482 | 44400 | 0.0002 | - | | 0.7491 | 44450 | 0.0001 | - | | 0.7499 | 44500 | 0.0004 | - | | 0.7508 | 44550 | 0.0 | - | | 0.7516 | 44600 | 0.0 | - | | 0.7525 | 44650 | 0.0002 | - | | 0.7533 | 44700 | 0.0 | - | | 0.7541 | 44750 | 0.0001 | - | | 0.7550 | 44800 | 0.0007 | - | | 0.7558 | 44850 | 0.0001 | - | | 0.7567 | 44900 | 0.0 | - | | 0.7575 | 44950 | 0.0005 | - | | 0.7584 | 45000 | 0.0035 | - | | 0.7592 | 45050 | 0.0047 | - | | 0.7600 | 45100 | 0.0043 | - | | 0.7609 | 45150 | 0.0032 | - | | 0.7617 | 45200 | 0.0104 | - | | 0.7626 | 45250 | 0.0116 | - | | 0.7634 | 45300 | 0.0017 | - | | 0.7643 | 45350 | 0.0032 | - | | 0.7651 | 45400 | 0.0078 | - | | 0.7659 | 45450 | 0.0078 | - | | 0.7668 | 45500 | 0.0051 | - | | 0.7676 | 45550 | 0.001 | - | | 0.7685 | 45600 | 0.0017 | - | | 0.7693 | 45650 | 0.0043 | - | | 0.7702 | 45700 | 0.0008 | - | | 0.7710 | 45750 | 0.0004 | - | | 0.7718 | 45800 | 0.0002 | - | | 0.7727 | 45850 | 0.0004 | - | | 0.7735 | 45900 | 0.0007 | - | | 0.7744 | 
45950 | 0.0003 | - | | 0.7752 | 46000 | 0.0004 | - | | 0.7760 | 46050 | 0.0002 | - | | 0.7769 | 46100 | 0.0006 | - | | 0.7777 | 46150 | 0.0001 | - | | 0.7786 | 46200 | 0.0002 | - | | 0.7794 | 46250 | 0.0002 | - | | 0.7803 | 46300 | 0.0003 | - | | 0.7811 | 46350 | 0.0002 | - | | 0.7819 | 46400 | 0.0002 | - | | 0.7828 | 46450 | 0.0004 | - | | 0.7836 | 46500 | 0.0003 | - | | 0.7845 | 46550 | 0.0003 | - | | 0.7853 | 46600 | 0.0008 | - | | 0.7862 | 46650 | 0.0002 | - | | 0.7870 | 46700 | 0.0003 | - | | 0.7878 | 46750 | 0.0002 | - | | 0.7887 | 46800 | 0.0002 | - | | 0.7895 | 46850 | 0.0002 | - | | 0.7904 | 46900 | 0.0001 | - | | 0.7912 | 46950 | 0.0005 | - | | 0.7921 | 47000 | 0.0007 | - | | 0.7929 | 47050 | 0.0002 | - | | 0.7937 | 47100 | 0.0013 | - | | 0.7946 | 47150 | 0.0002 | - | | 0.7954 | 47200 | 0.0001 | - | | 0.7963 | 47250 | 0.0008 | - | | 0.7971 | 47300 | 0.0002 | - | | 0.7980 | 47350 | 0.0001 | - | | 0.7988 | 47400 | 0.001 | - | | 0.7996 | 47450 | 0.0001 | - | | 0.8005 | 47500 | 0.0 | - | | 0.8013 | 47550 | 0.0 | - | | 0.8022 | 47600 | 0.0001 | - | | 0.8030 | 47650 | 0.0001 | - | | 0.8039 | 47700 | 0.0 | - | | 0.8047 | 47750 | 0.0 | - | | 0.8055 | 47800 | 0.0003 | - | | 0.8064 | 47850 | 0.001 | - | | 0.8072 | 47900 | 0.0003 | - | | 0.8081 | 47950 | 0.0001 | - | | 0.8089 | 48000 | 0.0002 | - | | 0.8098 | 48050 | 0.0003 | - | | 0.8106 | 48100 | 0.0002 | - | | 0.8114 | 48150 | 0.0003 | - | | 0.8123 | 48200 | 0.0002 | - | | 0.8131 | 48250 | 0.0005 | - | | 0.8140 | 48300 | 0.0001 | - | | 0.8148 | 48350 | 0.0001 | - | | 0.8157 | 48400 | 0.0006 | - | | 0.8165 | 48450 | 0.001 | - | | 0.8173 | 48500 | 0.0003 | - | | 0.8182 | 48550 | 0.0001 | - | | 0.8190 | 48600 | 0.0003 | - | | 0.8199 | 48650 | 0.0002 | - | | 0.8207 | 48700 | 0.0004 | - | | 0.8216 | 48750 | 0.0001 | - | | 0.8224 | 48800 | 0.0003 | - | | 0.8232 | 48850 | 0.0001 | - | | 0.8241 | 48900 | 0.0 | - | | 0.8249 | 48950 | 0.0002 | - | | 0.8258 | 49000 | 0.0002 | - | | 0.8266 | 49050 | 0.0 | - | | 0.8274 | 49100 | 0.0 | - | | 0.8283 | 49150 | 0.0 | - | | 0.8291 | 49200 | 0.0004 | - | | 0.8300 | 49250 | 0.0003 | - | | 0.8308 | 49300 | 0.0003 | - | | 0.8317 | 49350 | 0.0011 | - | | 0.8325 | 49400 | 0.0 | - | | 0.8333 | 49450 | 0.0003 | - | | 0.8342 | 49500 | 0.0002 | - | | 0.8350 | 49550 | 0.0009 | - | | 0.8359 | 49600 | 0.0005 | - | | 0.8367 | 49650 | 0.0009 | - | | 0.8376 | 49700 | 0.0 | - | | 0.8384 | 49750 | 0.0 | - | | 0.8392 | 49800 | 0.0002 | - | | 0.8401 | 49850 | 0.0001 | - | | 0.8409 | 49900 | 0.0002 | - | | 0.8418 | 49950 | 0.0 | - | | 0.8426 | 50000 | 0.0004 | - | | 0.8435 | 50050 | 0.0002 | - | | 0.8443 | 50100 | 0.0002 | - | | 0.8451 | 50150 | 0.0 | - | | 0.8460 | 50200 | 0.0009 | - | | 0.8468 | 50250 | 0.0001 | - | | 0.8477 | 50300 | 0.0001 | - | | 0.8485 | 50350 | 0.0005 | - | | 0.8494 | 50400 | 0.001 | - | | 0.8502 | 50450 | 0.0002 | - | | 0.8510 | 50500 | 0.0002 | - | | 0.8519 | 50550 | 0.0 | - | | 0.8527 | 50600 | 0.0001 | - | | 0.8536 | 50650 | 0.0003 | - | | 0.8544 | 50700 | 0.0001 | - | | 0.8553 | 50750 | 0.0003 | - | | 0.8561 | 50800 | 0.0003 | - | | 0.8569 | 50850 | 0.0002 | - | | 0.8578 | 50900 | 0.0003 | - | | 0.8586 | 50950 | 0.0002 | - | | 0.8595 | 51000 | 0.0002 | - | | 0.8603 | 51050 | 0.0002 | - | | 0.8612 | 51100 | 0.0 | - | | 0.8620 | 51150 | 0.0001 | - | | 0.8628 | 51200 | 0.0001 | - | | 0.8637 | 51250 | 0.0004 | - | | 0.8645 | 51300 | 0.0001 | - | | 0.8654 | 51350 | 0.0002 | - | | 0.8662 | 51400 | 0.0 | - | | 0.8671 | 51450 | 0.0 | - | | 0.8679 | 51500 | 0.0001 | - | | 0.8687 | 51550 | 0.0001 | - | | 
0.8696 | 51600 | 0.0001 | - | | 0.8704 | 51650 | 0.0001 | - | | 0.8713 | 51700 | 0.0 | - | | 0.8721 | 51750 | 0.0004 | - | | 0.8730 | 51800 | 0.0005 | - | | 0.8738 | 51850 | 0.0001 | - | | 0.8746 | 51900 | 0.0 | - | | 0.8755 | 51950 | 0.0003 | - | | 0.8763 | 52000 | 0.0004 | - | | 0.8772 | 52050 | 0.0002 | - | | 0.8780 | 52100 | 0.0 | - | | 0.8788 | 52150 | 0.0002 | - | | 0.8797 | 52200 | 0.0 | - | | 0.8805 | 52250 | 0.0003 | - | | 0.8814 | 52300 | 0.0004 | - | | 0.8822 | 52350 | 0.0009 | - | | 0.8831 | 52400 | 0.0003 | - | | 0.8839 | 52450 | 0.0003 | - | | 0.8847 | 52500 | 0.0005 | - | | 0.8856 | 52550 | 0.0001 | - | | 0.8864 | 52600 | 0.0 | - | | 0.8873 | 52650 | 0.0019 | - | | 0.8881 | 52700 | 0.0006 | - | | 0.8890 | 52750 | 0.0001 | - | | 0.8898 | 52800 | 0.004 | - | | 0.8906 | 52850 | 0.0023 | - | | 0.8915 | 52900 | 0.009 | - | | 0.8923 | 52950 | 0.0015 | - | | 0.8932 | 53000 | 0.0093 | - | | 0.8940 | 53050 | 0.0085 | - | | 0.8949 | 53100 | 0.0054 | - | | 0.8957 | 53150 | 0.003 | - | | 0.8965 | 53200 | 0.0009 | - | | 0.8974 | 53250 | 0.0021 | - | | 0.8982 | 53300 | 0.0008 | - | | 0.8991 | 53350 | 0.0017 | - | | 0.8999 | 53400 | 0.0022 | - | | 0.9008 | 53450 | 0.0005 | - | | 0.9016 | 53500 | 0.0004 | - | | 0.9024 | 53550 | 0.0022 | - | | 0.9033 | 53600 | 0.0017 | - | | 0.9041 | 53650 | 0.0001 | - | | 0.9050 | 53700 | 0.0024 | - | | 0.9058 | 53750 | 0.0003 | - | | 0.9067 | 53800 | 0.0024 | - | | 0.9075 | 53850 | 0.0004 | - | | 0.9083 | 53900 | 0.0003 | - | | 0.9092 | 53950 | 0.0002 | - | | 0.9100 | 54000 | 0.0004 | - | | 0.9109 | 54050 | 0.0002 | - | | 0.9117 | 54100 | 0.0004 | - | | 0.9126 | 54150 | 0.0003 | - | | 0.9134 | 54200 | 0.0002 | - | | 0.9142 | 54250 | 0.0 | - | | 0.9151 | 54300 | 0.0002 | - | | 0.9159 | 54350 | 0.0001 | - | | 0.9168 | 54400 | 0.0013 | - | | 0.9176 | 54450 | 0.0002 | - | | 0.9185 | 54500 | 0.0 | - | | 0.9193 | 54550 | 0.0014 | - | | 0.9201 | 54600 | 0.0002 | - | | 0.9210 | 54650 | 0.0009 | - | | 0.9218 | 54700 | 0.0005 | - | | 0.9227 | 54750 | 0.0002 | - | | 0.9235 | 54800 | 0.0003 | - | | 0.9243 | 54850 | 0.004 | - | | 0.9252 | 54900 | 0.0012 | - | | 0.9260 | 54950 | 0.0011 | - | | 0.9269 | 55000 | 0.0004 | - | | 0.9277 | 55050 | 0.0016 | - | | 0.9286 | 55100 | 0.0014 | - | | 0.9294 | 55150 | 0.0002 | - | | 0.9302 | 55200 | 0.0001 | - | | 0.9311 | 55250 | 0.0003 | - | | 0.9319 | 55300 | 0.0001 | - | | 0.9328 | 55350 | 0.0003 | - | | 0.9336 | 55400 | 0.0011 | - | | 0.9345 | 55450 | 0.0 | - | | 0.9353 | 55500 | 0.0003 | - | | 0.9361 | 55550 | 0.0003 | - | | 0.9370 | 55600 | 0.0003 | - | | 0.9378 | 55650 | 0.0006 | - | | 0.9387 | 55700 | 0.0001 | - | | 0.9395 | 55750 | 0.0005 | - | | 0.9404 | 55800 | 0.0004 | - | | 0.9412 | 55850 | 0.0002 | - | | 0.9420 | 55900 | 0.0001 | - | | 0.9429 | 55950 | 0.0003 | - | | 0.9437 | 56000 | 0.0 | - | | 0.9446 | 56050 | 0.0 | - | | 0.9454 | 56100 | 0.0002 | - | | 0.9463 | 56150 | 0.0001 | - | | 0.9471 | 56200 | 0.0011 | - | | 0.9479 | 56250 | 0.0003 | - | | 0.9488 | 56300 | 0.0002 | - | | 0.9496 | 56350 | 0.0001 | - | | 0.9505 | 56400 | 0.0003 | - | | 0.9513 | 56450 | 0.0003 | - | | 0.9522 | 56500 | 0.0003 | - | | 0.9530 | 56550 | 0.0003 | - | | 0.9538 | 56600 | 0.0003 | - | | 0.9547 | 56650 | 0.0 | - | | 0.9555 | 56700 | 0.0031 | - | | 0.9564 | 56750 | 0.0007 | - | | 0.9572 | 56800 | 0.0042 | - | | 0.9581 | 56850 | 0.0013 | - | | 0.9589 | 56900 | 0.0046 | - | | 0.9597 | 56950 | 0.0047 | - | | 0.9606 | 57000 | 0.0001 | - | | 0.9614 | 57050 | 0.0002 | - | | 0.9623 | 57100 | 0.0048 | - | | 0.9631 | 57150 | 0.0054 | - | | 0.9640 
| 57200 | 0.0021 | - | | 0.9648 | 57250 | 0.0025 | - | | 0.9656 | 57300 | 0.0024 | - | | 0.9665 | 57350 | 0.0021 | - | | 0.9673 | 57400 | 0.0011 | - | | 0.9682 | 57450 | 0.0017 | - | | 0.9690 | 57500 | 0.0021 | - | | 0.9699 | 57550 | 0.0013 | - | | 0.9707 | 57600 | 0.0017 | - | | 0.9715 | 57650 | 0.0027 | - | | 0.9724 | 57700 | 0.0014 | - | | 0.9732 | 57750 | 0.0021 | - | | 0.9741 | 57800 | 0.0026 | - | | 0.9749 | 57850 | 0.0003 | - | | 0.9757 | 57900 | 0.0022 | - | | 0.9766 | 57950 | 0.0016 | - | | 0.9774 | 58000 | 0.0015 | - | | 0.9783 | 58050 | 0.0012 | - | | 0.9791 | 58100 | 0.0015 | - | | 0.9800 | 58150 | 0.001 | - | | 0.9808 | 58200 | 0.0003 | - | | 0.9816 | 58250 | 0.0007 | - | | 0.9825 | 58300 | 0.0007 | - | | 0.9833 | 58350 | 0.0003 | - | | 0.9842 | 58400 | 0.0001 | - | | 0.9850 | 58450 | 0.0001 | - | | 0.9859 | 58500 | 0.0002 | - | | 0.9867 | 58550 | 0.0 | - | | 0.9875 | 58600 | 0.0001 | - | | 0.9884 | 58650 | 0.0002 | - | | 0.9892 | 58700 | 0.0002 | - | | 0.9901 | 58750 | 0.0 | - | | 0.9909 | 58800 | 0.0001 | - | | 0.9918 | 58850 | 0.0006 | - | | 0.9926 | 58900 | 0.0002 | - | | 0.9934 | 58950 | 0.0004 | - | | 0.9943 | 59000 | 0.0002 | - | | 0.9951 | 59050 | 0.0003 | - | | 0.9960 | 59100 | 0.0004 | - | | 0.9968 | 59150 | 0.0002 | - | | 0.9977 | 59200 | 0.0002 | - | | 0.9985 | 59250 | 0.0011 | - | | 0.9993 | 59300 | 0.0001 | - | | 1.0 | 59339 | - | 0.1346 | | 1.0002 | 59350 | 0.0001 | - | | 1.0010 | 59400 | 0.0 | - | | 1.0019 | 59450 | 0.0004 | - | | 1.0027 | 59500 | 0.0 | - | | 1.0036 | 59550 | 0.0002 | - | | 1.0044 | 59600 | 0.0001 | - | | 1.0052 | 59650 | 0.0004 | - | | 1.0061 | 59700 | 0.0001 | - | | 1.0069 | 59750 | 0.0002 | - | | 1.0078 | 59800 | 0.0002 | - | | 1.0086 | 59850 | 0.0 | - | | 1.0095 | 59900 | 0.0002 | - | | 1.0103 | 59950 | 0.0001 | - | | 1.0111 | 60000 | 0.0003 | - | | 1.0120 | 60050 | 0.0002 | - | | 1.0128 | 60100 | 0.0002 | - | | 1.0137 | 60150 | 0.0002 | - | | 1.0145 | 60200 | 0.0002 | - | | 1.0154 | 60250 | 0.0002 | - | | 1.0162 | 60300 | 0.0001 | - | | 1.0170 | 60350 | 0.0001 | - | | 1.0179 | 60400 | 0.0001 | - | | 1.0187 | 60450 | 0.0002 | - | | 1.0196 | 60500 | 0.0001 | - | | 1.0204 | 60550 | 0.0007 | - | | 1.0213 | 60600 | 0.0012 | - | | 1.0221 | 60650 | 0.0002 | - | | 1.0229 | 60700 | 0.0002 | - | | 1.0238 | 60750 | 0.0003 | - | | 1.0246 | 60800 | 0.0003 | - | | 1.0255 | 60850 | 0.0001 | - | | 1.0263 | 60900 | 0.0007 | - | | 1.0271 | 60950 | 0.0 | - | | 1.0280 | 61000 | 0.0001 | - | | 1.0288 | 61050 | 0.0003 | - | | 1.0297 | 61100 | 0.0002 | - | | 1.0305 | 61150 | 0.0001 | - | | 1.0314 | 61200 | 0.0004 | - | | 1.0322 | 61250 | 0.0002 | - | | 1.0330 | 61300 | 0.0001 | - | | 1.0339 | 61350 | 0.0001 | - | | 1.0347 | 61400 | 0.0004 | - | | 1.0356 | 61450 | 0.0 | - | | 1.0364 | 61500 | 0.0003 | - | | 1.0373 | 61550 | 0.0002 | - | | 1.0381 | 61600 | 0.0005 | - | | 1.0389 | 61650 | 0.0001 | - | | 1.0398 | 61700 | 0.0002 | - | | 1.0406 | 61750 | 0.0004 | - | | 1.0415 | 61800 | 0.0 | - | | 1.0423 | 61850 | 0.0002 | - | | 1.0432 | 61900 | 0.0004 | - | | 1.0440 | 61950 | 0.0002 | - | | 1.0448 | 62000 | 0.0003 | - | | 1.0457 | 62050 | 0.0001 | - | | 1.0465 | 62100 | 0.0001 | - | | 1.0474 | 62150 | 0.0003 | - | | 1.0482 | 62200 | 0.0002 | - | | 1.0491 | 62250 | 0.0002 | - | | 1.0499 | 62300 | 0.0004 | - | | 1.0507 | 62350 | 0.0002 | - | | 1.0516 | 62400 | 0.0007 | - | | 1.0524 | 62450 | 0.0002 | - | | 1.0533 | 62500 | 0.0 | - | | 1.0541 | 62550 | 0.0002 | - | | 1.0550 | 62600 | 0.0001 | - | | 1.0558 | 62650 | 0.0002 | - | | 1.0566 | 62700 | 0.0002 | - | | 1.0575 | 
62750 | 0.0 | - | | 1.0583 | 62800 | 0.0001 | - | | 1.0592 | 62850 | 0.0001 | - | | 1.0600 | 62900 | 0.0001 | - | | 1.0609 | 62950 | 0.0009 | - | | 1.0617 | 63000 | 0.0001 | - | | 1.0625 | 63050 | 0.0003 | - | | 1.0634 | 63100 | 0.0006 | - | | 1.0642 | 63150 | 0.0003 | - | | 1.0651 | 63200 | 0.0 | - | | 1.0659 | 63250 | 0.0002 | - | | 1.0668 | 63300 | 0.0002 | - | | 1.0676 | 63350 | 0.0009 | - | | 1.0684 | 63400 | 0.0003 | - | | 1.0693 | 63450 | 0.0004 | - | | 1.0701 | 63500 | 0.0002 | - | | 1.0710 | 63550 | 0.0001 | - | | 1.0718 | 63600 | 0.0003 | - | | 1.0727 | 63650 | 0.0004 | - | | 1.0735 | 63700 | 0.0002 | - | | 1.0743 | 63750 | 0.0001 | - | | 1.0752 | 63800 | 0.0002 | - | | 1.0760 | 63850 | 0.0001 | - | | 1.0769 | 63900 | 0.0002 | - | | 1.0777 | 63950 | 0.0002 | - | | 1.0785 | 64000 | 0.0001 | - | | 1.0794 | 64050 | 0.0 | - | | 1.0802 | 64100 | 0.0005 | - | | 1.0811 | 64150 | 0.0002 | - | | 1.0819 | 64200 | 0.0 | - | | 1.0828 | 64250 | 0.0002 | - | | 1.0836 | 64300 | 0.0002 | - | | 1.0844 | 64350 | 0.0002 | - | | 1.0853 | 64400 | 0.0002 | - | | 1.0861 | 64450 | 0.0002 | - | | 1.0870 | 64500 | 0.0008 | - | | 1.0878 | 64550 | 0.0001 | - | | 1.0887 | 64600 | 0.0001 | - | | 1.0895 | 64650 | 0.0 | - | | 1.0903 | 64700 | 0.0001 | - | | 1.0912 | 64750 | 0.0 | - | | 1.0920 | 64800 | 0.0 | - | | 1.0929 | 64850 | 0.0 | - | | 1.0937 | 64900 | 0.0003 | - | | 1.0946 | 64950 | 0.0001 | - | | 1.0954 | 65000 | 0.0 | - | | 1.0962 | 65050 | 0.0004 | - | | 1.0971 | 65100 | 0.0001 | - | | 1.0979 | 65150 | 0.0003 | - | | 1.0988 | 65200 | 0.0001 | - | | 1.0996 | 65250 | 0.0013 | - | | 1.1005 | 65300 | 0.0003 | - | | 1.1013 | 65350 | 0.0 | - | | 1.1021 | 65400 | 0.0001 | - | | 1.1030 | 65450 | 0.0008 | - | | 1.1038 | 65500 | 0.0003 | - | | 1.1047 | 65550 | 0.0001 | - | | 1.1055 | 65600 | 0.0001 | - | | 1.1064 | 65650 | 0.0 | - | | 1.1072 | 65700 | 0.0001 | - | | 1.1080 | 65750 | 0.0003 | - | | 1.1089 | 65800 | 0.0003 | - | | 1.1097 | 65850 | 0.0 | - | | 1.1106 | 65900 | 0.0012 | - | | 1.1114 | 65950 | 0.0024 | - | | 1.1123 | 66000 | 0.0036 | - | | 1.1131 | 66050 | 0.0056 | - | | 1.1139 | 66100 | 0.0038 | - | | 1.1148 | 66150 | 0.0057 | - | | 1.1156 | 66200 | 0.0009 | - | | 1.1165 | 66250 | 0.0026 | - | | 1.1173 | 66300 | 0.0016 | - | | 1.1182 | 66350 | 0.0007 | - | | 1.1190 | 66400 | 0.0017 | - | | 1.1198 | 66450 | 0.0008 | - | | 1.1207 | 66500 | 0.0003 | - | | 1.1215 | 66550 | 0.0001 | - | | 1.1224 | 66600 | 0.0001 | - | | 1.1232 | 66650 | 0.0032 | - | | 1.1240 | 66700 | 0.0021 | - | | 1.1249 | 66750 | 0.0017 | - | | 1.1257 | 66800 | 0.0011 | - | | 1.1266 | 66850 | 0.0004 | - | | 1.1274 | 66900 | 0.0011 | - | | 1.1283 | 66950 | 0.0001 | - | | 1.1291 | 67000 | 0.0016 | - | | 1.1299 | 67050 | 0.0011 | - | | 1.1308 | 67100 | 0.0016 | - | | 1.1316 | 67150 | 0.0008 | - | | 1.1325 | 67200 | 0.0004 | - | | 1.1333 | 67250 | 0.0001 | - | | 1.1342 | 67300 | 0.0004 | - | | 1.1350 | 67350 | 0.0005 | - | | 1.1358 | 67400 | 0.0006 | - | | 1.1367 | 67450 | 0.0009 | - | | 1.1375 | 67500 | 0.001 | - | | 1.1384 | 67550 | 0.0006 | - | | 1.1392 | 67600 | 0.0002 | - | | 1.1401 | 67650 | 0.0002 | - | | 1.1409 | 67700 | 0.0 | - | | 1.1417 | 67750 | 0.0005 | - | | 1.1426 | 67800 | 0.0004 | - | | 1.1434 | 67850 | 0.0003 | - | | 1.1443 | 67900 | 0.0013 | - | | 1.1451 | 67950 | 0.0002 | - | | 1.1460 | 68000 | 0.0001 | - | | 1.1468 | 68050 | 0.0004 | - | | 1.1476 | 68100 | 0.0001 | - | | 1.1485 | 68150 | 0.0004 | - | | 1.1493 | 68200 | 0.0008 | - | | 1.1502 | 68250 | 0.0 | - | | 1.1510 | 68300 | 0.0005 | - | | 1.1519 | 68350 | 
0.0002 | - | | 1.1527 | 68400 | 0.0002 | - | | 1.1535 | 68450 | 0.0001 | - | | 1.1544 | 68500 | 0.0001 | - | | 1.1552 | 68550 | 0.0004 | - | | 1.1561 | 68600 | 0.0001 | - | | 1.1569 | 68650 | 0.0002 | - | | 1.1578 | 68700 | 0.0 | - | | 1.1586 | 68750 | 0.0001 | - | | 1.1594 | 68800 | 0.0 | - | | 1.1603 | 68850 | 0.0002 | - | | 1.1611 | 68900 | 0.0 | - | | 1.1620 | 68950 | 0.0005 | - | | 1.1628 | 69000 | 0.0 | - | | 1.1637 | 69050 | 0.0002 | - | | 1.1645 | 69100 | 0.0003 | - | | 1.1653 | 69150 | 0.0003 | - | | 1.1662 | 69200 | 0.0002 | - | | 1.1670 | 69250 | 0.0011 | - | | 1.1679 | 69300 | 0.0001 | - | | 1.1687 | 69350 | 0.0 | - | | 1.1696 | 69400 | 0.0 | - | | 1.1704 | 69450 | 0.0003 | - | | 1.1712 | 69500 | 0.0004 | - | | 1.1721 | 69550 | 0.0001 | - | | 1.1729 | 69600 | 0.0001 | - | | 1.1738 | 69650 | 0.0 | - | | 1.1746 | 69700 | 0.0003 | - | | 1.1754 | 69750 | 0.0004 | - | | 1.1763 | 69800 | 0.0001 | - | | 1.1771 | 69850 | 0.0005 | - | | 1.1780 | 69900 | 0.0001 | - | | 1.1788 | 69950 | 0.0004 | - | | 1.1797 | 70000 | 0.0001 | - | | 1.1805 | 70050 | 0.0003 | - | | 1.1813 | 70100 | 0.0011 | - | | 1.1822 | 70150 | 0.0003 | - | | 1.1830 | 70200 | 0.0003 | - | | 1.1839 | 70250 | 0.0 | - | | 1.1847 | 70300 | 0.0003 | - | | 1.1856 | 70350 | 0.0001 | - | | 1.1864 | 70400 | 0.0002 | - | | 1.1872 | 70450 | 0.0001 | - | | 1.1881 | 70500 | 0.0002 | - | | 1.1889 | 70550 | 0.0001 | - | | 1.1898 | 70600 | 0.0008 | - | | 1.1906 | 70650 | 0.0004 | - | | 1.1915 | 70700 | 0.0004 | - | | 1.1923 | 70750 | 0.0002 | - | | 1.1931 | 70800 | 0.0002 | - | | 1.1940 | 70850 | 0.0002 | - | | 1.1948 | 70900 | 0.0005 | - | | 1.1957 | 70950 | 0.0008 | - | | 1.1965 | 71000 | 0.0001 | - | | 1.1974 | 71050 | 0.0003 | - | | 1.1982 | 71100 | 0.0002 | - | | 1.1990 | 71150 | 0.0 | - | | 1.1999 | 71200 | 0.0002 | - | | 1.2007 | 71250 | 0.0 | - | | 1.2016 | 71300 | 0.0009 | - | | 1.2024 | 71350 | 0.0003 | - | | 1.2033 | 71400 | 0.001 | - | | 1.2041 | 71450 | 0.0011 | - | | 1.2049 | 71500 | 0.0 | - | | 1.2058 | 71550 | 0.0 | - | | 1.2066 | 71600 | 0.0 | - | | 1.2075 | 71650 | 0.0001 | - | | 1.2083 | 71700 | 0.0008 | - | | 1.2092 | 71750 | 0.0002 | - | | 1.2100 | 71800 | 0.0003 | - | | 1.2108 | 71850 | 0.0009 | - | | 1.2117 | 71900 | 0.0001 | - | | 1.2125 | 71950 | 0.0001 | - | | 1.2134 | 72000 | 0.0 | - | | 1.2142 | 72050 | 0.0001 | - | | 1.2151 | 72100 | 0.0001 | - | | 1.2159 | 72150 | 0.0 | - | | 1.2167 | 72200 | 0.0 | - | | 1.2176 | 72250 | 0.0001 | - | | 1.2184 | 72300 | 0.0004 | - | | 1.2193 | 72350 | 0.0003 | - | | 1.2201 | 72400 | 0.0001 | - | | 1.2210 | 72450 | 0.0001 | - | | 1.2218 | 72500 | 0.0001 | - | | 1.2226 | 72550 | 0.0001 | - | | 1.2235 | 72600 | 0.0 | - | | 1.2243 | 72650 | 0.0003 | - | | 1.2252 | 72700 | 0.0001 | - | | 1.2260 | 72750 | 0.0001 | - | | 1.2268 | 72800 | 0.0001 | - | | 1.2277 | 72850 | 0.0008 | - | | 1.2285 | 72900 | 0.0001 | - | | 1.2294 | 72950 | 0.0001 | - | | 1.2302 | 73000 | 0.0001 | - | | 1.2311 | 73050 | 0.0 | - | | 1.2319 | 73100 | 0.0003 | - | | 1.2327 | 73150 | 0.0004 | - | | 1.2336 | 73200 | 0.0001 | - | | 1.2344 | 73250 | 0.0001 | - | | 1.2353 | 73300 | 0.0002 | - | | 1.2361 | 73350 | 0.0004 | - | | 1.2370 | 73400 | 0.0001 | - | | 1.2378 | 73450 | 0.0003 | - | | 1.2386 | 73500 | 0.0002 | - | | 1.2395 | 73550 | 0.0002 | - | | 1.2403 | 73600 | 0.0 | - | | 1.2412 | 73650 | 0.0001 | - | | 1.2420 | 73700 | 0.0003 | - | | 1.2429 | 73750 | 0.0 | - | | 1.2437 | 73800 | 0.0 | - | | 1.2445 | 73850 | 0.001 | - | | 1.2454 | 73900 | 0.0 | - | | 1.2462 | 73950 | 0.0008 | - | | 1.2471 | 74000 | 0.0002 
| - | | 1.2479 | 74050 | 0.0002 | - | | 1.2488 | 74100 | 0.0003 | - | | 1.2496 | 74150 | 0.0 | - | | 1.2504 | 74200 | 0.0003 | - | | 1.2513 | 74250 | 0.0002 | - | | 1.2521 | 74300 | 0.0001 | - | | 1.2530 | 74350 | 0.0 | - | | 1.2538 | 74400 | 0.0002 | - | | 1.2547 | 74450 | 0.0002 | - | | 1.2555 | 74500 | 0.0001 | - | | 1.2563 | 74550 | 0.0001 | - | | 1.2572 | 74600 | 0.0001 | - | | 1.2580 | 74650 | 0.0001 | - | | 1.2589 | 74700 | 0.0002 | - | | 1.2597 | 74750 | 0.0002 | - | | 1.2606 | 74800 | 0.0009 | - | | 1.2614 | 74850 | 0.0004 | - | | 1.2622 | 74900 | 0.0001 | - | | 1.2631 | 74950 | 0.0001 | - | | 1.2639 | 75000 | 0.0001 | - | | 1.2648 | 75050 | 0.0004 | - | | 1.2656 | 75100 | 0.0002 | - | | 1.2665 | 75150 | 0.0001 | - | | 1.2673 | 75200 | 0.0001 | - | | 1.2681 | 75250 | 0.0001 | - | | 1.2690 | 75300 | 0.0002 | - | | 1.2698 | 75350 | 0.0003 | - | | 1.2707 | 75400 | 0.0001 | - | | 1.2715 | 75450 | 0.0002 | - | | 1.2724 | 75500 | 0.0001 | - | | 1.2732 | 75550 | 0.0002 | - | | 1.2740 | 75600 | 0.0014 | - | | 1.2749 | 75650 | 0.0001 | - | | 1.2757 | 75700 | 0.0002 | - | | 1.2766 | 75750 | 0.0002 | - | | 1.2774 | 75800 | 0.0003 | - | | 1.2782 | 75850 | 0.0 | - | | 1.2791 | 75900 | 0.0 | - | | 1.2799 | 75950 | 0.0002 | - | | 1.2808 | 76000 | 0.0001 | - | | 1.2816 | 76050 | 0.0001 | - | | 1.2825 | 76100 | 0.0009 | - | | 1.2833 | 76150 | 0.0008 | - | | 1.2841 | 76200 | 0.0001 | - | | 1.2850 | 76250 | 0.0002 | - | | 1.2858 | 76300 | 0.0002 | - | | 1.2867 | 76350 | 0.001 | - | | 1.2875 | 76400 | 0.0009 | - | | 1.2884 | 76450 | 0.0002 | - | | 1.2892 | 76500 | 0.0 | - | | 1.2900 | 76550 | 0.0017 | - | | 1.2909 | 76600 | 0.0002 | - | | 1.2917 | 76650 | 0.0008 | - | | 1.2926 | 76700 | 0.0005 | - | | 1.2934 | 76750 | 0.0001 | - | | 1.2943 | 76800 | 0.0003 | - | | 1.2951 | 76850 | 0.0002 | - | | 1.2959 | 76900 | 0.0001 | - | | 1.2968 | 76950 | 0.0001 | - | | 1.2976 | 77000 | 0.0004 | - | | 1.2985 | 77050 | 0.0 | - | | 1.2993 | 77100 | 0.0003 | - | | 1.3002 | 77150 | 0.0001 | - | | 1.3010 | 77200 | 0.0002 | - | | 1.3018 | 77250 | 0.0002 | - | | 1.3027 | 77300 | 0.0002 | - | | 1.3035 | 77350 | 0.0002 | - | | 1.3044 | 77400 | 0.0004 | - | | 1.3052 | 77450 | 0.0009 | - | | 1.3061 | 77500 | 0.0 | - | | 1.3069 | 77550 | 0.0003 | - | | 1.3077 | 77600 | 0.0002 | - | | 1.3086 | 77650 | 0.0 | - | | 1.3094 | 77700 | 0.0003 | - | | 1.3103 | 77750 | 0.0 | - | | 1.3111 | 77800 | 0.0 | - | | 1.3120 | 77850 | 0.0009 | - | | 1.3128 | 77900 | 0.0 | - | | 1.3136 | 77950 | 0.0008 | - | | 1.3145 | 78000 | 0.0001 | - | | 1.3153 | 78050 | 0.0004 | - | | 1.3162 | 78100 | 0.0002 | - | | 1.3170 | 78150 | 0.0002 | - | | 1.3179 | 78200 | 0.0 | - | | 1.3187 | 78250 | 0.0008 | - | | 1.3195 | 78300 | 0.0003 | - | | 1.3204 | 78350 | 0.0002 | - | | 1.3212 | 78400 | 0.0 | - | | 1.3221 | 78450 | 0.0002 | - | | 1.3229 | 78500 | 0.0 | - | | 1.3237 | 78550 | 0.0 | - | | 1.3246 | 78600 | 0.0002 | - | | 1.3254 | 78650 | 0.0 | - | | 1.3263 | 78700 | 0.0002 | - | | 1.3271 | 78750 | 0.0003 | - | | 1.3280 | 78800 | 0.0002 | - | | 1.3288 | 78850 | 0.0002 | - | | 1.3296 | 78900 | 0.0001 | - | | 1.3305 | 78950 | 0.0 | - | | 1.3313 | 79000 | 0.0002 | - | | 1.3322 | 79050 | 0.0001 | - | | 1.3330 | 79100 | 0.0002 | - | | 1.3339 | 79150 | 0.0001 | - | | 1.3347 | 79200 | 0.0003 | - | | 1.3355 | 79250 | 0.0002 | - | | 1.3364 | 79300 | 0.0001 | - | | 1.3372 | 79350 | 0.0 | - | | 1.3381 | 79400 | 0.0007 | - | | 1.3389 | 79450 | 0.0002 | - | | 1.3398 | 79500 | 0.0002 | - | | 1.3406 | 79550 | 0.0003 | - | | 1.3414 | 79600 | 0.0002 | - | | 1.3423 | 79650 | 
0.0 | - | | 1.3431 | 79700 | 0.0004 | - | | 1.3440 | 79750 | 0.0009 | - | | 1.3448 | 79800 | 0.0018 | - | | 1.3457 | 79850 | 0.0002 | - | | 1.3465 | 79900 | 0.0003 | - | | 1.3473 | 79950 | 0.0001 | - | | 1.3482 | 80000 | 0.0005 | - | | 1.3490 | 80050 | 0.0003 | - | | 1.3499 | 80100 | 0.0002 | - | | 1.3507 | 80150 | 0.0001 | - | | 1.3516 | 80200 | 0.0002 | - | | 1.3524 | 80250 | 0.0003 | - | | 1.3532 | 80300 | 0.0004 | - | | 1.3541 | 80350 | 0.0002 | - | | 1.3549 | 80400 | 0.0 | - | | 1.3558 | 80450 | 0.0002 | - | | 1.3566 | 80500 | 0.0 | - | | 1.3575 | 80550 | 0.0 | - | | 1.3583 | 80600 | 0.0002 | - | | 1.3591 | 80650 | 0.0009 | - | | 1.3600 | 80700 | 0.0002 | - | | 1.3608 | 80750 | 0.0001 | - | | 1.3617 | 80800 | 0.0001 | - | | 1.3625 | 80850 | 0.0006 | - | | 1.3634 | 80900 | 0.0002 | - | | 1.3642 | 80950 | 0.0002 | - | | 1.3650 | 81000 | 0.0001 | - | | 1.3659 | 81050 | 0.0 | - | | 1.3667 | 81100 | 0.0007 | - | | 1.3676 | 81150 | 0.0001 | - | | 1.3684 | 81200 | 0.0003 | - | | 1.3693 | 81250 | 0.0001 | - | | 1.3701 | 81300 | 0.0 | - | | 1.3709 | 81350 | 0.0011 | - | | 1.3718 | 81400 | 0.0004 | - | | 1.3726 | 81450 | 0.0002 | - | | 1.3735 | 81500 | 0.0003 | - | | 1.3743 | 81550 | 0.0002 | - | | 1.3751 | 81600 | 0.0 | - | | 1.3760 | 81650 | 0.0001 | - | ### Framework Versions - Python: 3.10.14 - SetFit: 1.1.0 - Sentence Transformers: 3.1.1 - Transformers: 4.45.1 - PyTorch: 2.4.1+cu121 - Datasets: 3.0.1 - Tokenizers: 0.20.0 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
phongtintruong/Gwen2.5-Coder-3B-Instruct-5th
phongtintruong
"2025-04-09T04:00:04Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-09T03:57:21Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
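The "How to Get Started" section above is empty; a minimal sketch for this repository, assuming the tokenizer ships a chat template (suggested by the `conversational` tag) and that bfloat16 is available — the prompt is purely illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "phongtintruong/Gwen2.5-Coder-3B-Instruct-5th"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The "conversational" tag suggests the tokenizer bundles a chat template.
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```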
SaiprasadP/wav2vec2
SaiprasadP
"2024-07-24T10:46:00Z"
7
0
transformers
[ "transformers", "tf", "safetensors", "wav2vec2", "audio-classification", "generated_from_keras_callback", "base_model:facebook/wav2vec2-base-960h", "base_model:finetune:facebook/wav2vec2-base-960h", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
"2024-06-20T15:54:11Z"
--- base_model: facebook/wav2vec2-base-960h license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: wav2vec2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2 This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.42.0.dev0 - TensorFlow 2.15.0 - Datasets 2.20.0 - Tokenizers 0.19.1
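The card has no usage section; a minimal sketch with the transformers audio-classification pipeline, assuming the repository's safetensors weights load under the default PyTorch backend and that the label set is defined in the model config (neither is stated above):

```python
from transformers import pipeline

# Assumes the safetensors weights in the repo are loadable by the default (PyTorch) pipeline.
classifier = pipeline("audio-classification", model="SaiprasadP/wav2vec2")

# Illustrative path to a local audio file; wav2vec2 expects 16 kHz mono audio.
predictions = classifier("sample.wav")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```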
MayBashendy/Arabic_FineTuningAraBERT_AugV4_k3_task5_organization_fold0
MayBashendy
"2024-11-17T06:58:55Z"
182
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:aubmindlab/bert-base-arabertv02", "base_model:finetune:aubmindlab/bert-base-arabertv02", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-11-17T06:55:52Z"
--- library_name: transformers base_model: aubmindlab/bert-base-arabertv02 tags: - generated_from_trainer model-index: - name: Arabic_FineTuningAraBERT_AugV4_k3_task5_organization_fold0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Arabic_FineTuningAraBERT_AugV4_k3_task5_organization_fold0 This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7382 - Qwk: 0.7246 - Mse: 0.7382 - Rmse: 0.8592 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:| | No log | 0.0513 | 2 | 1.6798 | 0.1292 | 1.6798 | 1.2961 | | No log | 0.1026 | 4 | 0.8502 | 0.3929 | 0.8502 | 0.9221 | | No log | 0.1538 | 6 | 1.0036 | 0.2748 | 1.0036 | 1.0018 | | No log | 0.2051 | 8 | 1.0074 | 0.3485 | 1.0074 | 1.0037 | | No log | 0.2564 | 10 | 1.3085 | 0.3711 | 1.3085 | 1.1439 | | No log | 0.3077 | 12 | 1.6674 | 0.2661 | 1.6674 | 1.2913 | | No log | 0.3590 | 14 | 1.5436 | 0.3810 | 1.5436 | 1.2424 | | No log | 0.4103 | 16 | 1.3116 | 0.4329 | 1.3116 | 1.1453 | | No log | 0.4615 | 18 | 1.1551 | 0.5248 | 1.1551 | 1.0748 | | No log | 0.5128 | 20 | 1.0202 | 0.5078 | 1.0202 | 1.0100 | | No log | 0.5641 | 22 | 1.0144 | 0.5106 | 1.0144 | 1.0072 | | No log | 0.6154 | 24 | 1.0164 | 0.5833 | 1.0164 | 1.0082 | | No log | 0.6667 | 26 | 1.3514 | 0.5059 | 1.3514 | 1.1625 | | No log | 0.7179 | 28 | 1.6571 | 0.2924 | 1.6571 | 1.2873 | | No log | 0.7692 | 30 | 1.7037 | 0.3669 | 1.7037 | 1.3053 | | No log | 0.8205 | 32 | 1.5695 | 0.4259 | 1.5695 | 1.2528 | | No log | 0.8718 | 34 | 1.2626 | 0.5059 | 1.2626 | 1.1237 | | No log | 0.9231 | 36 | 0.9242 | 0.4892 | 0.9242 | 0.9613 | | No log | 0.9744 | 38 | 0.7772 | 0.6122 | 0.7772 | 0.8816 | | No log | 1.0256 | 40 | 0.9318 | 0.5777 | 0.9318 | 0.9653 | | No log | 1.0769 | 42 | 1.1560 | 0.6440 | 1.1560 | 1.0752 | | No log | 1.1282 | 44 | 0.8477 | 0.6472 | 0.8477 | 0.9207 | | No log | 1.1795 | 46 | 0.6152 | 0.7526 | 0.6152 | 0.7844 | | No log | 1.2308 | 48 | 0.5471 | 0.7783 | 0.5471 | 0.7397 | | No log | 1.2821 | 50 | 0.6035 | 0.7746 | 0.6035 | 0.7769 | | No log | 1.3333 | 52 | 0.8874 | 0.6641 | 0.8874 | 0.9420 | | No log | 1.3846 | 54 | 0.9153 | 0.6259 | 0.9153 | 0.9567 | | No log | 1.4359 | 56 | 0.8796 | 0.6798 | 0.8796 | 0.9379 | | No log | 1.4872 | 58 | 0.7511 | 0.7890 | 0.7511 | 0.8667 | | No log | 1.5385 | 60 | 0.7170 | 0.7890 | 0.7170 | 0.8468 | | No log | 1.5897 | 62 | 0.7078 | 0.7927 | 0.7078 | 0.8413 | | No log | 1.6410 | 64 | 0.8194 | 0.7213 | 0.8194 | 0.9052 | | No log | 1.6923 | 66 | 0.9875 | 0.6135 | 0.9875 | 0.9937 | | No log | 1.7436 | 68 | 1.0117 | 0.6558 | 1.0117 | 1.0058 | | No log | 1.7949 | 70 | 0.8916 | 0.6975 | 0.8916 | 0.9442 | | No log | 1.8462 | 72 | 0.7945 | 0.7405 | 0.7945 | 0.8913 | | No log | 1.8974 | 74 | 0.7060 | 
0.7405 | 0.7060 | 0.8402 | | No log | 1.9487 | 76 | 0.5151 | 0.7477 | 0.5151 | 0.7177 | | No log | 2.0 | 78 | 0.5055 | 0.7229 | 0.5055 | 0.7110 | | No log | 2.0513 | 80 | 0.7032 | 0.6840 | 0.7032 | 0.8386 | | No log | 2.1026 | 82 | 0.8792 | 0.7028 | 0.8792 | 0.9376 | | No log | 2.1538 | 84 | 0.8936 | 0.7028 | 0.8936 | 0.9453 | | No log | 2.2051 | 86 | 0.7084 | 0.7313 | 0.7084 | 0.8417 | | No log | 2.2564 | 88 | 0.5150 | 0.7059 | 0.5150 | 0.7176 | | No log | 2.3077 | 90 | 0.4530 | 0.7303 | 0.4530 | 0.6731 | | No log | 2.3590 | 92 | 0.5585 | 0.7662 | 0.5585 | 0.7473 | | No log | 2.4103 | 94 | 0.6872 | 0.7819 | 0.6872 | 0.8290 | | No log | 2.4615 | 96 | 0.9243 | 0.7218 | 0.9243 | 0.9614 | | No log | 2.5128 | 98 | 0.9183 | 0.7218 | 0.9183 | 0.9583 | | No log | 2.5641 | 100 | 0.8287 | 0.7550 | 0.8287 | 0.9103 | | No log | 2.6154 | 102 | 0.7378 | 0.7565 | 0.7378 | 0.8589 | | No log | 2.6667 | 104 | 0.7533 | 0.7274 | 0.7533 | 0.8679 | | No log | 2.7179 | 106 | 1.0279 | 0.6505 | 1.0279 | 1.0139 | | No log | 2.7692 | 108 | 1.3726 | 0.5206 | 1.3726 | 1.1716 | | No log | 2.8205 | 110 | 1.2890 | 0.5469 | 1.2890 | 1.1354 | | No log | 2.8718 | 112 | 0.9362 | 0.7028 | 0.9362 | 0.9676 | | No log | 2.9231 | 114 | 0.5806 | 0.6560 | 0.5806 | 0.7619 | | No log | 2.9744 | 116 | 0.4581 | 0.7305 | 0.4581 | 0.6768 | | No log | 3.0256 | 118 | 0.4685 | 0.7305 | 0.4685 | 0.6844 | | No log | 3.0769 | 120 | 0.5048 | 0.6649 | 0.5048 | 0.7105 | | No log | 3.1282 | 122 | 0.6891 | 0.6667 | 0.6891 | 0.8301 | | No log | 3.1795 | 124 | 0.9603 | 0.7104 | 0.9603 | 0.9799 | | No log | 3.2308 | 126 | 1.2026 | 0.6006 | 1.2026 | 1.0966 | | No log | 3.2821 | 128 | 1.1772 | 0.6440 | 1.1772 | 1.0850 | | No log | 3.3333 | 130 | 0.9355 | 0.6713 | 0.9355 | 0.9672 | | No log | 3.3846 | 132 | 0.8384 | 0.6902 | 0.8384 | 0.9156 | | No log | 3.4359 | 134 | 0.7438 | 0.6902 | 0.7438 | 0.8624 | | No log | 3.4872 | 136 | 0.7492 | 0.7089 | 0.7492 | 0.8656 | | No log | 3.5385 | 138 | 0.8974 | 0.7089 | 0.8974 | 0.9473 | | No log | 3.5897 | 140 | 0.9010 | 0.7089 | 0.9010 | 0.9492 | | No log | 3.6410 | 142 | 0.7665 | 0.7035 | 0.7665 | 0.8755 | | No log | 3.6923 | 144 | 0.8625 | 0.7035 | 0.8625 | 0.9287 | | No log | 3.7436 | 146 | 1.0609 | 0.7089 | 1.0609 | 1.0300 | | No log | 3.7949 | 148 | 1.2683 | 0.6387 | 1.2683 | 1.1262 | | No log | 3.8462 | 150 | 1.3860 | 0.5857 | 1.3860 | 1.1773 | | No log | 3.8974 | 152 | 1.2582 | 0.6604 | 1.2582 | 1.1217 | | No log | 3.9487 | 154 | 0.9460 | 0.7405 | 0.9460 | 0.9726 | | No log | 4.0 | 156 | 0.7367 | 0.7246 | 0.7367 | 0.8583 | | No log | 4.0513 | 158 | 0.6147 | 0.6844 | 0.6147 | 0.7841 | | No log | 4.1026 | 160 | 0.6328 | 0.6844 | 0.6328 | 0.7955 | | No log | 4.1538 | 162 | 0.8208 | 0.7217 | 0.8208 | 0.9060 | | No log | 4.2051 | 164 | 1.0473 | 0.7028 | 1.0473 | 1.0234 | | No log | 4.2564 | 166 | 1.2404 | 0.5827 | 1.2404 | 1.1137 | | No log | 4.3077 | 168 | 1.1662 | 0.7059 | 1.1662 | 1.0799 | | No log | 4.3590 | 170 | 0.8989 | 0.7246 | 0.8989 | 0.9481 | | No log | 4.4103 | 172 | 0.7310 | 0.7213 | 0.7310 | 0.8550 | | No log | 4.4615 | 174 | 0.5828 | 0.7138 | 0.5828 | 0.7634 | | No log | 4.5128 | 176 | 0.5451 | 0.7138 | 0.5451 | 0.7383 | | No log | 4.5641 | 178 | 0.6070 | 0.6942 | 0.6070 | 0.7791 | | No log | 4.6154 | 180 | 0.7602 | 0.7246 | 0.7602 | 0.8719 | | No log | 4.6667 | 182 | 0.8060 | 0.7246 | 0.8060 | 0.8977 | | No log | 4.7179 | 184 | 0.7828 | 0.7246 | 0.7828 | 0.8848 | | No log | 4.7692 | 186 | 0.7099 | 0.7246 | 0.7099 | 0.8426 | | No log | 4.8205 | 188 | 0.6491 | 0.7246 | 0.6491 | 0.8057 | | No log | 
4.8718 | 190 | 0.5690 | 0.7531 | 0.5690 | 0.7543 | | No log | 4.9231 | 192 | 0.5797 | 0.7368 | 0.5797 | 0.7614 | | No log | 4.9744 | 194 | 0.6023 | 0.7368 | 0.6023 | 0.7761 | | No log | 5.0256 | 196 | 0.5716 | 0.7557 | 0.5716 | 0.7561 | | No log | 5.0769 | 198 | 0.6038 | 0.7368 | 0.6038 | 0.7770 | | No log | 5.1282 | 200 | 0.7496 | 0.7246 | 0.7496 | 0.8658 | | No log | 5.1795 | 202 | 0.8466 | 0.7246 | 0.8466 | 0.9201 | | No log | 5.2308 | 204 | 0.7752 | 0.7246 | 0.7752 | 0.8805 | | No log | 5.2821 | 206 | 0.7792 | 0.7059 | 0.7792 | 0.8827 | | No log | 5.3333 | 208 | 0.8109 | 0.7217 | 0.8109 | 0.9005 | | No log | 5.3846 | 210 | 0.9456 | 0.6837 | 0.9456 | 0.9724 | | No log | 5.4359 | 212 | 1.0314 | 0.6837 | 1.0314 | 1.0156 | | No log | 5.4872 | 214 | 1.0902 | 0.6837 | 1.0902 | 1.0441 | | No log | 5.5385 | 216 | 0.9371 | 0.7028 | 0.9371 | 0.9680 | | No log | 5.5897 | 218 | 0.8022 | 0.7059 | 0.8022 | 0.8957 | | No log | 5.6410 | 220 | 0.5897 | 0.7246 | 0.5897 | 0.7679 | | No log | 5.6923 | 222 | 0.5021 | 0.6792 | 0.5021 | 0.7086 | | No log | 5.7436 | 224 | 0.5056 | 0.7271 | 0.5056 | 0.7111 | | No log | 5.7949 | 226 | 0.5600 | 0.7089 | 0.5600 | 0.7483 | | No log | 5.8462 | 228 | 0.6668 | 0.7246 | 0.6668 | 0.8166 | | No log | 5.8974 | 230 | 0.6773 | 0.7246 | 0.6773 | 0.8230 | | No log | 5.9487 | 232 | 0.7347 | 0.7246 | 0.7347 | 0.8572 | | No log | 6.0 | 234 | 0.7081 | 0.7246 | 0.7081 | 0.8415 | | No log | 6.0513 | 236 | 0.6961 | 0.7059 | 0.6961 | 0.8343 | | No log | 6.1026 | 238 | 0.6793 | 0.7059 | 0.6793 | 0.8242 | | No log | 6.1538 | 240 | 0.6235 | 0.7043 | 0.6235 | 0.7896 | | No log | 6.2051 | 242 | 0.5881 | 0.7043 | 0.5881 | 0.7669 | | No log | 6.2564 | 244 | 0.6097 | 0.7043 | 0.6097 | 0.7808 | | No log | 6.3077 | 246 | 0.7494 | 0.7059 | 0.7494 | 0.8657 | | No log | 6.3590 | 248 | 0.8154 | 0.7059 | 0.8154 | 0.9030 | | No log | 6.4103 | 250 | 0.7366 | 0.7059 | 0.7366 | 0.8583 | | No log | 6.4615 | 252 | 0.5909 | 0.7043 | 0.5909 | 0.7687 | | No log | 6.5128 | 254 | 0.5486 | 0.7170 | 0.5486 | 0.7407 | | No log | 6.5641 | 256 | 0.5589 | 0.7343 | 0.5589 | 0.7476 | | No log | 6.6154 | 258 | 0.5803 | 0.7343 | 0.5803 | 0.7618 | | No log | 6.6667 | 260 | 0.6166 | 0.7246 | 0.6166 | 0.7852 | | No log | 6.7179 | 262 | 0.7308 | 0.7059 | 0.7308 | 0.8549 | | No log | 6.7692 | 264 | 0.7632 | 0.7059 | 0.7632 | 0.8736 | | No log | 6.8205 | 266 | 0.7008 | 0.7059 | 0.7008 | 0.8371 | | No log | 6.8718 | 268 | 0.6392 | 0.7246 | 0.6392 | 0.7995 | | No log | 6.9231 | 270 | 0.6700 | 0.7246 | 0.6700 | 0.8185 | | No log | 6.9744 | 272 | 0.7106 | 0.7059 | 0.7106 | 0.8430 | | No log | 7.0256 | 274 | 0.7540 | 0.7059 | 0.7540 | 0.8683 | | No log | 7.0769 | 276 | 0.7929 | 0.7059 | 0.7929 | 0.8904 | | No log | 7.1282 | 278 | 0.7929 | 0.7059 | 0.7929 | 0.8904 | | No log | 7.1795 | 280 | 0.7807 | 0.7059 | 0.7807 | 0.8836 | | No log | 7.2308 | 282 | 0.8130 | 0.7059 | 0.8130 | 0.9017 | | No log | 7.2821 | 284 | 0.9107 | 0.6870 | 0.9107 | 0.9543 | | No log | 7.3333 | 286 | 0.9551 | 0.6870 | 0.9551 | 0.9773 | | No log | 7.3846 | 288 | 0.9141 | 0.6870 | 0.9141 | 0.9561 | | No log | 7.4359 | 290 | 0.8090 | 0.7059 | 0.8090 | 0.8995 | | No log | 7.4872 | 292 | 0.7130 | 0.7059 | 0.7130 | 0.8444 | | No log | 7.5385 | 294 | 0.6544 | 0.7059 | 0.6544 | 0.8089 | | No log | 7.5897 | 296 | 0.6508 | 0.7059 | 0.6508 | 0.8067 | | No log | 7.6410 | 298 | 0.6553 | 0.7059 | 0.6553 | 0.8095 | | No log | 7.6923 | 300 | 0.6756 | 0.7059 | 0.6756 | 0.8220 | | No log | 7.7436 | 302 | 0.7054 | 0.7059 | 0.7054 | 0.8399 | | No log | 7.7949 | 304 | 0.6811 | 
0.7246 | 0.6811 | 0.8253 | | No log | 7.8462 | 306 | 0.6276 | 0.7246 | 0.6276 | 0.7922 | | No log | 7.8974 | 308 | 0.6218 | 0.7246 | 0.6218 | 0.7885 | | No log | 7.9487 | 310 | 0.6172 | 0.7246 | 0.6172 | 0.7856 | | No log | 8.0 | 312 | 0.6130 | 0.7246 | 0.6130 | 0.7829 | | No log | 8.0513 | 314 | 0.6366 | 0.7246 | 0.6366 | 0.7979 | | No log | 8.1026 | 316 | 0.6435 | 0.7246 | 0.6435 | 0.8022 | | No log | 8.1538 | 318 | 0.6742 | 0.7246 | 0.6742 | 0.8211 | | No log | 8.2051 | 320 | 0.7341 | 0.7246 | 0.7341 | 0.8568 | | No log | 8.2564 | 322 | 0.7764 | 0.7246 | 0.7764 | 0.8812 | | No log | 8.3077 | 324 | 0.7850 | 0.7059 | 0.7850 | 0.8860 | | No log | 8.3590 | 326 | 0.7942 | 0.7059 | 0.7942 | 0.8912 | | No log | 8.4103 | 328 | 0.8383 | 0.7059 | 0.8383 | 0.9156 | | No log | 8.4615 | 330 | 0.8446 | 0.7059 | 0.8446 | 0.9190 | | No log | 8.5128 | 332 | 0.8369 | 0.7059 | 0.8369 | 0.9148 | | No log | 8.5641 | 334 | 0.8209 | 0.7059 | 0.8209 | 0.9060 | | No log | 8.6154 | 336 | 0.7812 | 0.7059 | 0.7812 | 0.8838 | | No log | 8.6667 | 338 | 0.7391 | 0.7059 | 0.7391 | 0.8597 | | No log | 8.7179 | 340 | 0.7230 | 0.7059 | 0.7230 | 0.8503 | | No log | 8.7692 | 342 | 0.6974 | 0.7246 | 0.6974 | 0.8351 | | No log | 8.8205 | 344 | 0.6841 | 0.7246 | 0.6841 | 0.8271 | | No log | 8.8718 | 346 | 0.6848 | 0.7246 | 0.6848 | 0.8276 | | No log | 8.9231 | 348 | 0.7076 | 0.7246 | 0.7076 | 0.8412 | | No log | 8.9744 | 350 | 0.7213 | 0.7246 | 0.7213 | 0.8493 | | No log | 9.0256 | 352 | 0.7444 | 0.7246 | 0.7444 | 0.8628 | | No log | 9.0769 | 354 | 0.7843 | 0.7246 | 0.7843 | 0.8856 | | No log | 9.1282 | 356 | 0.7986 | 0.7246 | 0.7986 | 0.8936 | | No log | 9.1795 | 358 | 0.8085 | 0.7246 | 0.8085 | 0.8992 | | No log | 9.2308 | 360 | 0.7950 | 0.7246 | 0.7950 | 0.8916 | | No log | 9.2821 | 362 | 0.7717 | 0.7246 | 0.7717 | 0.8785 | | No log | 9.3333 | 364 | 0.7587 | 0.7246 | 0.7587 | 0.8710 | | No log | 9.3846 | 366 | 0.7433 | 0.7246 | 0.7433 | 0.8621 | | No log | 9.4359 | 368 | 0.7457 | 0.7246 | 0.7457 | 0.8635 | | No log | 9.4872 | 370 | 0.7388 | 0.7246 | 0.7388 | 0.8595 | | No log | 9.5385 | 372 | 0.7329 | 0.7246 | 0.7329 | 0.8561 | | No log | 9.5897 | 374 | 0.7219 | 0.7246 | 0.7219 | 0.8496 | | No log | 9.6410 | 376 | 0.7144 | 0.7246 | 0.7144 | 0.8452 | | No log | 9.6923 | 378 | 0.7092 | 0.7246 | 0.7092 | 0.8422 | | No log | 9.7436 | 380 | 0.7106 | 0.7246 | 0.7106 | 0.8429 | | No log | 9.7949 | 382 | 0.7180 | 0.7246 | 0.7180 | 0.8474 | | No log | 9.8462 | 384 | 0.7267 | 0.7246 | 0.7267 | 0.8525 | | No log | 9.8974 | 386 | 0.7326 | 0.7246 | 0.7326 | 0.8559 | | No log | 9.9487 | 388 | 0.7372 | 0.7246 | 0.7372 | 0.8586 | | No log | 10.0 | 390 | 0.7382 | 0.7246 | 0.7382 | 0.8592 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu118 - Datasets 2.21.0 - Tokenizers 0.19.1
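Since the card reports Qwk/MSE/RMSE rather than accuracy, the model appears to produce a single regression-style score; a minimal inference sketch under that assumption (check `config.num_labels` to confirm the head shape):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "MayBashendy/Arabic_FineTuningAraBERT_AugV4_k3_task5_organization_fold0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "نص عربي للتقييم"  # illustrative Arabic input
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# The Qwk/MSE/RMSE metrics above suggest a single scalar score per input.
print(logits.squeeze().tolist())
```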
gaianet/GritLM-7B-GGUF
gaianet
"2024-04-18T05:39:25Z"
2
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-04-15T05:54:01Z"
# GritLM-7B-GGUF Original model: [GritLM/GritLM-8x7B](https://huggingface.co/GritLM/GritLM-8x7B). Use llama.cpp's conversion and quantization scripts to produce the GGUF files.
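The card only points at llama.cpp's tooling; a minimal sketch for running one of the resulting GGUF files with the llama-cpp-python bindings — both the bindings and the file name are assumptions, not stated in the card:

```python
from llama_cpp import Llama  # llama-cpp-python; an assumption, the card only mentions llama.cpp itself

# Hypothetical file name: download one of the repo's GGUF files and adjust accordingly.
llm = Llama(model_path="GritLM-7B-Q4_K_M.gguf", n_ctx=4096)

output = llm("Briefly explain what a GGUF file is.", max_tokens=64)
print(output["choices"][0]["text"])
```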
TransferGraph/jb2k_bert-base-multilingual-cased-language-detection-finetuned-lora-tweet_eval_hate
TransferGraph
"2024-02-29T13:44:07Z"
0
0
peft
[ "peft", "safetensors", "parquet", "text-classification", "dataset:tweet_eval", "base_model:jb2k/bert-base-multilingual-cased-language-detection", "base_model:adapter:jb2k/bert-base-multilingual-cased-language-detection", "model-index", "region:us" ]
text-classification
"2024-02-29T13:44:05Z"
--- library_name: peft tags: - parquet - text-classification datasets: - tweet_eval metrics: - accuracy base_model: jb2k/bert-base-multilingual-cased-language-detection model-index: - name: jb2k_bert-base-multilingual-cased-language-detection-finetuned-lora-tweet_eval_hate results: - task: type: text-classification name: Text Classification dataset: name: tweet_eval type: tweet_eval config: hate split: validation args: hate metrics: - type: accuracy value: 0.682 name: accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # jb2k_bert-base-multilingual-cased-language-detection-finetuned-lora-tweet_eval_hate This model is a fine-tuned version of [jb2k/bert-base-multilingual-cased-language-detection](https://huggingface.co/jb2k/bert-base-multilingual-cased-language-detection) on the tweet_eval dataset. It achieves the following results on the evaluation set: - accuracy: 0.682 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | accuracy | train_loss | epoch | |:--------:|:----------:|:-----:| | 0.573 | None | 0 | | 0.676 | 0.6360 | 0 | | 0.686 | 0.5747 | 1 | | 0.663 | 0.5399 | 2 | | 0.682 | 0.5196 | 3 | ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.16.1 - Tokenizers 0.15.2
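A minimal sketch for loading the adapter on top of its base model with PEFT. Giving the encoder a fresh 2-label head (the hate task is binary) and letting the adapter's saved modules override it follows the usual PEFT pattern, but it is not documented in the card, so treat it as an assumption:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "jb2k/bert-base-multilingual-cased-language-detection"
adapter_id = "TransferGraph/jb2k_bert-base-multilingual-cased-language-detection-finetuned-lora-tweet_eval_hate"

# The base checkpoint ships a language-detection head, so a fresh 2-label head is created here.
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, ignore_mismatched_sizes=True
)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

inputs = tokenizer("I can't believe people still say things like this.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Label order (hate vs. not-hate) is an assumption; check the adapter/model config.
print(logits.softmax(dim=-1))
```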
draziert/LunarLander-v2-ppo-scratch-v2
draziert
"2023-08-14T01:57:37Z"
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
"2023-08-14T01:57:25Z"
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 250.26 +/- 36.82 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 2500000 'learning_rate': 0.00025 'num_envs': 16 'num_steps': 512 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.98 'num_minibatches': 256 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': 0.015 'repo_id': 'draziert/LunarLander-v2-ppo-scratch-v2' 'batch_size': 8192 'minibatch_size': 32} ```
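The last two entries of the hyperparameter dict are derived from the others, as in CleanRL-style PPO; a quick check of that arithmetic:

```python
num_envs, num_steps, num_minibatches = 16, 512, 256

batch_size = num_envs * num_steps               # 16 * 512 = 8192, matches 'batch_size' above
minibatch_size = batch_size // num_minibatches  # 8192 // 256 = 32, matches 'minibatch_size' above

print(batch_size, minibatch_size)
```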
habulaj/1712254022250x576568139924306300
habulaj
"2024-04-04T19:00:09Z"
1
0
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "dataset:Theuzs/Carnage_Raimi", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
"2024-04-04T18:07:12Z"
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A TOK character tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: false datasets: - Theuzs/Carnage_Raimi --- # LoRA DreamBooth - squaadinc/1712254022250x576568139924306300 These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0 trained on @fffiloni's SD-XL trainer. The weights were trained on the concept prompt: ``` A TOK character ``` Use this keyword to trigger your custom model in your prompts. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Usage Make sure to upgrade diffusers to >= 0.19.0: ``` pip install diffusers --upgrade ``` In addition make sure to install transformers, safetensors, accelerate as well as the invisible watermark: ``` pip install invisible_watermark transformers accelerate safetensors ``` To just use the base model, you can run: ```python import torch from diffusers import DiffusionPipeline, AutoencoderKL device = "cuda" if torch.cuda.is_available() else "cpu" vae = AutoencoderKL.from_pretrained('madebyollin/sdxl-vae-fp16-fix', torch_dtype=torch.float16) pipe = DiffusionPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16, variant="fp16", use_safetensors=True ) pipe.to(device) # This is where you load your trained weights specific_safetensors = "pytorch_lora_weights.safetensors" lora_scale = 0.9 pipe.load_lora_weights( 'squaadinc/1712254022250x576568139924306300', weight_name = specific_safetensors, # use_auth_token = True ) prompt = "A majestic A TOK character jumping from a big stone at night" image = pipe( prompt=prompt, num_inference_steps=50, cross_attention_kwargs={"scale": lora_scale} ).images[0] ```
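A short follow-up to the usage snippet above: saving the output and dropping the adapter when you want the plain base pipeline again. `unload_lora_weights` is assumed to be available in the installed diffusers release; it is not mentioned in the card:

```python
# Continues from the snippet above (uses `image` and `pipe`).
image.save("tok_character.png")

# Remove the LoRA weights to reuse the pipeline as the plain SDXL base model.
pipe.unload_lora_weights()
```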
rcarey/bloomz-560m_PROMPT_TUNING_CAUSAL_LM
rcarey
"2023-12-01T18:47:37Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:bigscience/bloomz-560m", "base_model:adapter:bigscience/bloomz-560m", "region:us" ]
null
"2023-11-30T23:00:17Z"
--- library_name: peft base_model: bigscience/bloomz-560m --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.2
conorgee/FEW_SHOT_bloomz-560m_PROMPT_TUNING_CAUSAL_LM
conorgee
"2024-03-13T17:17:11Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-03-13T17:17:09Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
BobMcDear/xcit_small24_patch16_224_dist
BobMcDear
"2023-01-15T19:45:03Z"
0
0
null
[ "region:us" ]
null
"2023-01-15T14:27:13Z"
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
ReadyArt/EVA-Qwen2.5-14B-v0.2_EXL2_6.0bpw_H8
ReadyArt
"2025-01-24T22:31:35Z"
6
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "conversational", "dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal", "dataset:Nopm/Opus_WritingStruct", "dataset:Gryphe/Sonnet3.5-SlimOrcaDedupCleaned", "dataset:Gryphe/Sonnet3.5-Charcard-Roleplay", "dataset:Gryphe/ChatGPT-4o-Writing-Prompts", "dataset:Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned", "dataset:Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned", "dataset:nothingiisreal/Reddit-Dirty-And-WritingPrompts", "dataset:allura-org/Celeste-1.x-data-mixture", "dataset:cognitivecomputations/dolphin-2.9.3", "base_model:Qwen/Qwen2.5-14B", "base_model:quantized:Qwen/Qwen2.5-14B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "6-bit", "exl2", "region:us" ]
text-generation
"2025-01-24T22:28:07Z"
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-14B datasets: - anthracite-org/kalo-opus-instruct-22k-no-refusal - Nopm/Opus_WritingStruct - Gryphe/Sonnet3.5-SlimOrcaDedupCleaned - Gryphe/Sonnet3.5-Charcard-Roleplay - Gryphe/ChatGPT-4o-Writing-Prompts - Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned - Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned - nothingiisreal/Reddit-Dirty-And-WritingPrompts - allura-org/Celeste-1.x-data-mixture - cognitivecomputations/dolphin-2.9.3 tags: - generated_from_trainer model-index: - name: EVA-Qwen2.5-14B-SFFT-v0.2 results: [] --- # EVA Qwen2.5-14B v0.2 <p> A RP/storywriting specialist model, full-parameter finetune of Qwen2.5-14B on mixture of synthetic and natural data.<br> It uses Celeste 70B 0.1 data mixture, greatly expanding it to improve versatility, creativity and "flavor" of the resulting model.<br> </p> <p><b>Version notes for 0.2</b>: Now using the refined dataset from 32B 0.2. Major improvements in coherence, instruction following and long-context comprehension over 14B v0.1.</p> <p> <p>Prompt format is ChatML.</p><br> <h3>Recommended sampler values:</h3> <ul> <li>Temperature: 0.8</li> <li>Min-P: 0.05</li> <li>Top-A: 0.3</li> <li>Repetition Penalty: 1.03</li> </ul> <h3>Recommended SillyTavern presets (via CalamitousFelicitousness):</h3> - [Context](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Context.json) - [Instruct and System Prompt](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Instruct.json) </p> <p> <br> <h3> Training data: </h3> <ul> <li>Celeste 70B 0.1 data mixture minus Opus Instruct subset. See that model's <a href=https://huggingface.co/nothingiisreal/L3.1-70B-Celeste-V0.1-BF16>card</a> for details.</li> <li>Kalomaze's Opus_Instruct_25k dataset, filtered for refusals.</li> <li>A subset (1k rows) of ChatGPT-4o-WritingPrompts by Gryphe</li> <li>A subset (2k rows) of Sonnet3.5-Charcards-Roleplay by Gryphe</li> <li>Synthstruct and SynthRP datasets by Epiculous</li> <li>A subset from Dolphin-2.9.3, including filtered version of not_samantha and a small subset of systemchat.</li> </ul> <h3> Training time and hardware: </h3> <ul><li> 3 hours on 8xH100 SXM, provided by <a href=https://featherless.ai/>FeatherlessAI</a></li></ul><br> </p> <p>Model was created by Kearm, Auri and Cahvay.</p> <h4>Special thanks:</h4><ul> <li><b>to Cahvay for his work on investigating and reprocessing the corrupted dataset, removing the single biggest source of data poisoning.</b></li> <li><b>to <a href=https://featherless.ai/>FeatherlessAI</a> for generously providing 8xH100 SXM node for training of this model</b></li> <li>to Gryphe, Lemmy, Kalomaze, Nopm, Epiculous and CognitiveComputations for the data</li> <li>and to Allura-org for support, feedback, beta-testing and doing quality control of EVA models.</li></ul> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml base_model: Qwen/Qwen2.5-14B load_in_8bit: false load_in_4bit: false strict: false plugins: - axolotl.integrations.liger.LigerPlugin liger_rope: true liger_rms_norm: true liger_swiglu: true liger_fused_linear_cross_entropy: true # plugins: # - axolotl.integrations.spectrum.SpectrumPlugin # spectrum_top_fraction: 0.5 # # Optional if using a 
pre-scanned model as your base_model. Useful if using a model mirror # spectrum_model_name: Qwen/Qwen2.5-32B datasets: - path: datasets/Celeste_Filtered_utf8fix.jsonl type: sharegpt - path: datasets/deduped_not_samantha_norefusals.jsonl type: sharegpt - path: datasets/deduped_SynthRP-Gens_processed_ShareGPT_converted_cleaned.jsonl type: sharegpt - path: datasets/deduped_Synthstruct-Gens_processed_sharegpt_converted_cleaned.jsonl type: sharegpt - path: datasets/Gryphe-4o-WP-filtered-sharegpt_utf8fix.jsonl type: sharegpt - path: datasets/opus-instruct-22k-no_refusals-filtered_utf8fix.jsonl type: sharegpt - path: datasets/Sonnet3-5-charcard-names-filtered-sharegpt_utf8fix.jsonl type: sharegpt - path: datasets/SystemChat_subset_filtered_sharegpt_utf8fix.jsonl type: sharegpt chat_template: chatml shuffle_merged_datasets: true val_set_size: 0.001 output_dir: ./EVA-Qwen2.5-14B-SFFT-v0.2 sequence_len: 10240 sample_packing: true eval_sample_packing: false pad_to_sequence_len: true # adapter: qlora # lora_model_dir: # lora_r: 64 # lora_alpha: 128 # lora_dropout: 0.05 # lora_target_linear: true # peft_use_dora: true base_model: Qwen/Qwen2.5-14B load_in_8bit: false load_in_4bit: false strict: false plugins: - axolotl.integrations.liger.LigerPlugin liger_rope: true liger_rms_norm: true liger_swiglu: true liger_fused_linear_cross_entropy: true datasets: - path: datasets/Celeste_Filtered_utf8fix.jsonl type: sharegpt - path: datasets/deduped_not_samantha_norefusals.jsonl type: sharegpt - path: datasets/deduped_SynthRP-Gens_processed_ShareGPT_converted_cleaned.jsonl type: sharegpt - path: datasets/deduped_Synthstruct-Gens_processed_sharegpt_converted_cleaned.jsonl type: sharegpt - path: datasets/Gryphe-4o-WP-filtered-sharegpt_utf8fix.jsonl type: sharegpt - path: datasets/opus-instruct-22k-no_refusals-filtered_utf8fix.jsonl type: sharegpt - path: datasets/Sonnet3-5-charcard-names-filtered-sharegpt_utf8fix.jsonl type: sharegpt - path: datasets/SystemChat_subset_filtered_sharegpt_utf8fix.jsonl type: sharegpt chat_template: chatml shuffle_merged_datasets: true val_set_size: 0.005 output_dir: ./EVA-Qwen2.5-14B-SFFT-v0.2 sequence_len: 10240 sample_packing: true eval_sample_packing: false pad_to_sequence_len: true # adapter: qlora # lora_model_dir: # lora_r: 32 # lora_alpha: 16 # lora_dropout: 0.05 # lora_target_linear: true # peft_use_dora: true unfrozen_parameters: - ^lm_head.weight$ - ^model.embed_tokens.weight$ # mlp.down_proj layers - model.layers.1.mlp.down_proj - model.layers.35.mlp.down_proj - model.layers.38.mlp.down_proj - model.layers.37.mlp.down_proj - model.layers.36.mlp.down_proj - model.layers.15.mlp.down_proj - model.layers.11.mlp.down_proj - model.layers.12.mlp.down_proj - model.layers.34.mlp.down_proj - model.layers.44.mlp.down_proj - model.layers.45.mlp.down_proj - model.layers.9.mlp.down_proj - model.layers.41.mlp.down_proj - model.layers.33.mlp.down_proj - model.layers.43.mlp.down_proj - model.layers.40.mlp.down_proj - model.layers.13.mlp.down_proj - model.layers.8.mlp.down_proj - model.layers.39.mlp.down_proj - model.layers.10.mlp.down_proj - model.layers.14.mlp.down_proj - model.layers.16.mlp.down_proj - model.layers.31.mlp.down_proj - model.layers.32.mlp.down_proj # mlp.gate_proj layers - model.layers.1.mlp.gate_proj - model.layers.44.mlp.gate_proj - model.layers.46.mlp.gate_proj - model.layers.45.mlp.gate_proj - model.layers.43.mlp.gate_proj - model.layers.47.mlp.gate_proj - model.layers.42.mlp.gate_proj - model.layers.32.mlp.gate_proj - model.layers.27.mlp.gate_proj - 
model.layers.33.mlp.gate_proj - model.layers.28.mlp.gate_proj - model.layers.39.mlp.gate_proj - model.layers.41.mlp.gate_proj - model.layers.40.mlp.gate_proj - model.layers.30.mlp.gate_proj - model.layers.29.mlp.gate_proj - model.layers.31.mlp.gate_proj - model.layers.37.mlp.gate_proj - model.layers.26.mlp.gate_proj - model.layers.10.mlp.gate_proj - model.layers.38.mlp.gate_proj - model.layers.36.mlp.gate_proj - model.layers.12.mlp.gate_proj - model.layers.13.mlp.gate_proj # mlp.up_proj layers - model.layers.1.mlp.up_proj - model.layers.13.mlp.up_proj - model.layers.11.mlp.up_proj - model.layers.14.mlp.up_proj - model.layers.15.mlp.up_proj - model.layers.12.mlp.up_proj - model.layers.8.mlp.up_proj - model.layers.16.mlp.up_proj - model.layers.9.mlp.up_proj - model.layers.19.mlp.up_proj - model.layers.10.mlp.up_proj - model.layers.7.mlp.up_proj - model.layers.17.mlp.up_proj - model.layers.20.mlp.up_proj - model.layers.21.mlp.up_proj - model.layers.18.mlp.up_proj - model.layers.37.mlp.up_proj - model.layers.38.mlp.up_proj - model.layers.39.mlp.up_proj - model.layers.42.mlp.up_proj - model.layers.41.mlp.up_proj - model.layers.27.mlp.up_proj - model.layers.28.mlp.up_proj - model.layers.36.mlp.up_proj # self_attn.k_proj layers - model.layers.47.self_attn.k_proj - model.layers.39.self_attn.k_proj - model.layers.41.self_attn.k_proj - model.layers.37.self_attn.k_proj - model.layers.35.self_attn.k_proj - model.layers.44.self_attn.k_proj - model.layers.38.self_attn.k_proj - model.layers.14.self_attn.k_proj - model.layers.7.self_attn.k_proj - model.layers.12.self_attn.k_proj - model.layers.11.self_attn.k_proj - model.layers.32.self_attn.k_proj - model.layers.10.self_attn.k_proj - model.layers.8.self_attn.k_proj - model.layers.6.self_attn.k_proj - model.layers.9.self_attn.k_proj - model.layers.45.self_attn.k_proj - model.layers.42.self_attn.k_proj - model.layers.40.self_attn.k_proj - model.layers.5.self_attn.k_proj - model.layers.0.self_attn.k_proj - model.layers.33.self_attn.k_proj - model.layers.34.self_attn.k_proj - model.layers.13.self_attn.k_proj # self_attn.o_proj layers - model.layers.12.self_attn.o_proj - model.layers.5.self_attn.o_proj - model.layers.14.self_attn.o_proj - model.layers.16.self_attn.o_proj - model.layers.20.self_attn.o_proj - model.layers.13.self_attn.o_proj - model.layers.11.self_attn.o_proj - model.layers.4.self_attn.o_proj - model.layers.6.self_attn.o_proj - model.layers.19.self_attn.o_proj - model.layers.7.self_attn.o_proj - model.layers.18.self_attn.o_proj - model.layers.8.self_attn.o_proj - model.layers.38.self_attn.o_proj - model.layers.15.self_attn.o_proj - model.layers.17.self_attn.o_proj - model.layers.9.self_attn.o_proj - model.layers.10.self_attn.o_proj - model.layers.21.self_attn.o_proj - model.layers.28.self_attn.o_proj - model.layers.32.self_attn.o_proj - model.layers.35.self_attn.o_proj - model.layers.39.self_attn.o_proj - model.layers.3.self_attn.o_proj # self_attn.q_proj layers - model.layers.1.self_attn.q_proj - model.layers.2.self_attn.q_proj - model.layers.3.self_attn.q_proj - model.layers.44.self_attn.q_proj - model.layers.29.self_attn.q_proj - model.layers.45.self_attn.q_proj - model.layers.43.self_attn.q_proj - model.layers.32.self_attn.q_proj - model.layers.38.self_attn.q_proj - model.layers.19.self_attn.q_proj - model.layers.42.self_attn.q_proj - model.layers.34.self_attn.q_proj - model.layers.36.self_attn.q_proj - model.layers.40.self_attn.q_proj - model.layers.26.self_attn.q_proj - model.layers.20.self_attn.q_proj - model.layers.28.self_attn.q_proj - 
model.layers.39.self_attn.q_proj - model.layers.41.self_attn.q_proj - model.layers.33.self_attn.q_proj - model.layers.35.self_attn.q_proj - model.layers.25.self_attn.q_proj - model.layers.30.self_attn.q_proj - model.layers.27.self_attn.q_proj # self_attn.v_proj layers - model.layers.0.self_attn.v_proj - model.layers.7.self_attn.v_proj - model.layers.39.self_attn.v_proj - model.layers.31.self_attn.v_proj - model.layers.15.self_attn.v_proj - model.layers.10.self_attn.v_proj - model.layers.41.self_attn.v_proj - model.layers.32.self_attn.v_proj - model.layers.6.self_attn.v_proj - model.layers.33.self_attn.v_proj - model.layers.42.self_attn.v_proj - model.layers.29.self_attn.v_proj - model.layers.9.self_attn.v_proj - model.layers.14.self_attn.v_proj - model.layers.35.self_attn.v_proj - model.layers.38.self_attn.v_proj - model.layers.13.self_attn.v_proj - model.layers.30.self_attn.v_proj - model.layers.34.self_attn.v_proj - model.layers.5.self_attn.v_proj - model.layers.28.self_attn.v_proj - model.layers.37.self_attn.v_proj - model.layers.27.self_attn.v_proj - model.layers.11.self_attn.v_proj wandb_project: EVA-Qwen2.5-14B-SFFT-v0.2 wandb_entity: wandb_watch: wandb_name: Unit-02 wandb_log_model: gradient_accumulation_steps: 8 micro_batch_size: 2 num_epochs: 3 optimizer: paged_ademamix_8bit lr_scheduler: cosine learning_rate: 0.00005 max_grad_norm: 3 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: "unsloth" # gradient_checkpointing_kwargs: # use_reentrant: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 20 evals_per_epoch: 4 saves_per_epoch: 4 save_safetensors: true hub_model_id: hub_strategy: debug: deepspeed: deepspeed_configs/zero3_bf16.json weight_decay: 0.1 # fsdp: # - full_shard # - auto_wrap # fsdp_config: # fsdp_limit_all_gathers: true # fsdp_sync_module_states: false # fsdp_offload_params: true # fsdp_cpu_ram_efficient_loading: true # fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP # fsdp_transformer_layer_cls_to_wrap: Qwen2DecoderLayer # fsdp_activation_checkpointing: true # fsdp_state_dict_type: SHARDED_STATE_DICT # Changed from FULL_STATE_DICT # fsdp_sharding_strategy: FULL_SHARD # fsdp_forward_prefetch: false # Added # fsdp_backward_prefetch: "BACKWARD_PRE" # Added # fsdp_backward_prefetch_limit: 1 # Added # fsdp_mixed_precision: BF16 # Added ``` </details><br>
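The card above specifies ChatML as the prompt format and lists recommended sampler values. As a minimal illustrative sketch (not part of the original card), the snippet below only assembles a ChatML prompt string and collects those sampler values into a dict; actually loading this EXL2 quantization requires an exllamav2-based backend (for example TabbyAPI or text-generation-webui), so model loading is deliberately omitted, and the system/user messages are invented examples.

```python
# Sketch only: build a ChatML prompt and gather the card's recommended sampler
# values. Loading the EXL2 weights themselves is backend-specific and omitted.
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

recommended_samplers = {
    "temperature": 0.8,
    "min_p": 0.05,
    "top_a": 0.3,              # not supported by every backend
    "repetition_penalty": 1.03,
}

prompt = chatml_prompt(
    system="You are a creative roleplay and storywriting assistant.",  # invented example
    user="Write the opening paragraph of a mystery set on a night train.",
)
print(prompt)
print(recommended_samplers)
```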
MaziyarPanahi/Experiment28M7_Experiment26Experiment26
MaziyarPanahi
"2024-04-08T15:46:32Z"
18
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "Safetensors", "text-generation-inference", "merge", "base_model:automerger/Experiment26Experiment26-7B", "base_model:merge:automerger/Experiment26Experiment26-7B", "base_model:automerger/Experiment28M7-7B", "base_model:merge:automerger/Experiment28M7-7B", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
"2024-04-08T15:34:00Z"
--- license: apache-2.0 tags: - Safetensors - text-generation-inference - merge model_name: Experiment28M7_Experiment26Experiment26 base_model: - automerger/Experiment28M7-7B - automerger/Experiment26Experiment26-7B inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # Experiment28M7_Experiment26Experiment26 Experiment28M7_Experiment26Experiment26 is a merge of the following models: * [automerger/Experiment28M7-7B](https://huggingface.co/automerger/Experiment28M7-7B) * [automerger/Experiment26Experiment26-7B](https://huggingface.co/automerger/Experiment26Experiment26-7B) ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/Experiment28M7_Experiment26Experiment26" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
thebirdmanjax/VoidedRealityV1
thebirdmanjax
"2023-05-09T07:39:39Z"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2023-05-09T07:39:02Z"
--- license: creativeml-openrail-m --- thebirdmanjax/VoidedRealityV1 These are some of my favorite Realism Models , and wanted to create something geared to what Style I needed to not have to change model so much. Once Again all credit to the orginal model creators i have linked below Model Description All Credit Goes to the Original Model Creators which will be listed below. Uploaded For Personal Use <This a Blended Model with its main base Of SG161222/Realistic_Vision_V2.0 Then added one at a time into the previous blend . 65% SG161222/Realistic_Vision_V2.0 + 35% XpucT/Deliberate Here is the method: 65% SG161222/Realistic_Vision_V2.0 + 35% XpucT/Deliberat rea/delib + 25%Fantastimix real/delib/fant +20% clarity rea/delib/fant/clarity/+20% lyriel real/delib/fant/clarity/lyriel + 15% lazymix Final Modeel =thebirdmanjax/VoidedRealityV1 Developed by: [More Information Needed] Shared by [optional]: [More Information Needed] Model type: [More Information Needed] Language(s) (NLP): [More Information Needed] License: [More Information Needed] Finetuned from model [optional]: [More Information Needed] Model Sources [optional] https://civitai.com/models/5062/clarity https://civitai.com/models/22402/fantasticmix https://civitai.com/models/22922/lyriel https://civitai.com/models/10961/lazymix-real-amateur-nudes Repository: [More Information Needed] Paper [optional]: [More Information Needed] Demo [optional]: [More Information Needed] Uses Thie mix is great at realism ,and has some element of pretty decent art ability , it seems to enjoy diff sizes of photos so play around with the initial pic size also eular a seems to work best Sample Prompt rule of thirds:1.2) wide angle view ,200mm lens,highly detailed,( full length body shot:1.2) RAW photo of beautiful (1girl) naked 32 year old by Marco Grob and Tom Munro (female model :1.4) (Amateur:1.2), (highly detailed hair:1.3) (BDSM:1.2),dungeon background,(skin pores, skin imperfection:1.2), (beautiful eye color :0.8), highly detailed body, (suspended:1.4), (shibari:1.2), (tied up:1.2) highly detailed face, (realistic soft cinematic lighting:1.2), 8k high definition, insanely detailed, intricate detail, (Highly Detailed Skin:1.4) (masterpiece:1.2), highest quality, trending on artstation, UHD, DSLR,film grain, (suspension1.4), (legs spread open:1.2), centered , Neg Prompt upclose ,panites , bra , covered vagina ,jpeg artifacts , headshot , improper crop, Headshot Portrait , asian ,up close , bad crop , too close , way to close zoomed in , up close , Negative prompt: no asshole, missing asshole , not showing asshole , boring background, inside , not realistic ,fake looking skin , non human skin , plastic looking skin ,((((((hands ))))), ((((inverted nipples))))) , (((OUT OF FRAME ))) teen , child ,younger , wrong age , different nipples , out of frame , no rule of thirds , jpeg artifacts , clothing , extra limbs , anatomically incorrect, extra body parts, hands , out of frame , bad crop, mutated hands , fused fingers , fused hand ,inverted nipple , weird nipple, black sport on nipple , unfinished nipple , breasts different sizes, nipples are not identical , young , child ,ee (deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime:1.4) , text, close up, cropped, out of frame, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, extra legs, 
crossed legs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, touching legs, extra legs, fused fingers, too many fingers, long neck, Stats Steps: 33, Sampler: Euler a, CFG scale: 7, Seed: 1901051343, Face restoration: GFPGAN, Size: 512x520, Model hash: f420cc1303, Model: RealDeliberateVoidedClarity Denoising strength: 0.7, Hires upscale: 2, Hires upscaler: Latent (antialiased), AddNet Enabled: True, AddNet Module 1: LoRA, AddNet Model 1: suspension(e027f683f29a), AddNet Weight A 1: 0.75, AddNet Weight B 1: 0.75
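The blend described above is built by folding one model at a time into the previous mix with a fixed weight. Purely as an illustration (not the author's actual tooling, which may well have been a WebUI checkpoint merger), a single weighted-blending step over safetensors state dicts could look like the sketch below; the file names are placeholders.

```python
# Minimal sketch of one weighted checkpoint-blending step, assuming plain
# Stable Diffusion checkpoints stored as safetensors state dicts with matching keys.
from safetensors.torch import load_file, save_file

def blend(state_a, state_b, weight_b):
    """Linear interpolation (1 - w) * A + w * B over keys present in both models."""
    return {
        k: (1.0 - weight_b) * state_a[k].float() + weight_b * state_b[k].float()
        for k in state_a
        if k in state_b and state_a[k].shape == state_b[k].shape
    }

base = load_file("realistic_vision_v2.safetensors")  # placeholder path
deliberate = load_file("deliberate.safetensors")     # placeholder path

# First step of the recipe: 65% Realistic Vision + 35% Deliberate.
mix = blend(base, deliberate, 0.35)
# The remaining models (fantasticmix 25%, clarity 20%, lyriel 20%, lazymix 15%)
# would be folded in the same way, one at a time.
save_file(mix, "voided_reality_step1.safetensors")
```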
lesso03/f0a23870-b0f4-4640-b359-4ddb8323da3e
lesso03
"2025-03-16T13:35:00Z"
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2-7B-Instruct", "base_model:adapter:Qwen/Qwen2-7B-Instruct", "license:apache-2.0", "region:us" ]
null
"2025-03-16T11:40:21Z"
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2-7B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: f0a23870-b0f4-4640-b359-4ddb8323da3e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2-7B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - e2889cf7c4e2be93_train_data.json ds_type: json format: custom path: /workspace/input_data/e2889cf7c4e2be93_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 3 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 500 evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 8 gradient_checkpointing: true group_by_length: true hub_model_id: lesso03/f0a23870-b0f4-4640-b359-4ddb8323da3e hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.000203 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 50 lora_alpha: 128 lora_dropout: 0.15 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_steps: 500 micro_batch_size: 4 mlflow_experiment_name: /tmp/e2889cf7c4e2be93_train_data.json model_type: AutoModelForCausalLM num_epochs: 10 optimizer: adamw_torch_fused output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 500 saves_per_epoch: null seed: 30 sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 31cd4a2f-085b-4e17-8e3f-653c344b95bc wandb_project: 03a wandb_run: your_name wandb_runid: 31cd4a2f-085b-4e17-8e3f-653c344b95bc warmup_steps: 100 weight_decay: 0.0 xformers_attention: null ``` </details><br> # f0a23870-b0f4-4640-b359-4ddb8323da3e This model is a fine-tuned version of [Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.8439 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000203 - train_batch_size: 4 - eval_batch_size: 4 - seed: 30 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0003 | 1 | 2.1160 | | 1.8351 | 0.1512 | 500 | 1.8439 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
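To try this adapter outside the training pipeline, a minimal sketch (not part of the original card) of attaching the LoRA weights to the base model with `peft` could look like the following; it assumes the published adapter loads directly onto `Qwen/Qwen2-7B-Instruct`.

```python
# Sketch: load the base model, attach the LoRA adapter, and run a short generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2-7B-Instruct"
adapter_id = "lesso03/f0a23870-b0f4-4640-b359-4ddb8323da3e"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Explain in one sentence what a LoRA adapter is."  # invented example prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```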
AlexWortega/vikhr-7b-v4-chatml-veles
AlexWortega
"2024-04-09T14:33:59Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:AlexWortega/v4", "base_model:adapter:AlexWortega/v4", "region:us" ]
null
"2024-04-09T04:57:46Z"
--- library_name: peft base_model: AlexWortega/v4 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
bunoheb/main_emotion_classifier_4
bunoheb
"2024-03-20T17:06:58Z"
180
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-03-20T17:06:00Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Krisshvamsi/TTS
Krisshvamsi
"2024-05-03T17:50:23Z"
4
1
speechbrain
[ "speechbrain", "text-to-speech", "TTS", "speech-synthesis", "Tacotron2", "en", "dataset:LJSpeech", "arxiv:1809.08895", "arxiv:2106.04624", "license:apache-2.0", "region:us" ]
text-to-speech
"2024-04-21T17:40:28Z"
--- language: en tags: - text-to-speech - TTS - speech-synthesis - Tacotron2 - speechbrain license: apache-2.0 datasets: - LJSpeech metrics: - mos pipeline_tag: text-to-speech library_name: speechbrain --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # Text-to-Speech (TTS) with Transformer trained on LJSpeech This repository provides all the necessary tools for Text-to-Speech (TTS) with SpeechBrain using a [Transformer](https://arxiv.org/pdf/1809.08895.pdf) pretrained on [LJSpeech](https://keithito.com/LJ-Speech-Dataset/). The pre-trained model takes a short text as input and produces a spectrogram as output. One can get the final waveform by applying a vocoder (e.g., HiFIGAN) on top of the generated spectrogram. ### Perform Text-to-Speech (TTS) - Running Inference To run model inference, pull the interface directory as shown in the cell below. Note: run on a T4 GPU for faster inference ``` !pip install --upgrade --no-cache-dir gdown !gdown 1oy8Y5zwkLel7diA63GNCD-6cfoBV4tq7 !unzip inference.zip ``` ```python %%capture !pip install speechbrain %cd inference ``` ```python import torchaudio from TTSModel import TTSModel from IPython.display import Audio from speechbrain.inference.vocoders import HIFIGAN texts = ["This is a sample text for synthesis."] model_source_path = "/content/inference" # Initialize TTS (Transformer) and Vocoder (HiFIGAN) my_tts_model = TTSModel.from_hparams(source=model_source_path) hifi_gan = HIFIGAN.from_hparams(source="speechbrain/tts-hifigan-ljspeech", savedir="tmpdir_vocoder") # Running the TTS mel_output = my_tts_model.encode_text(texts) # Running Vocoder (spectrogram-to-waveform) waveforms = hifi_gan.decode_batch(mel_output) # Save the waveform torchaudio.save('example_TTS.wav', waveforms.squeeze(1), 22050) print("Saved the audio file!") ``` If you want to generate multiple sentences in one shot, pass the sentences as items in a list. ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. Note: for training the model, please visit this [TTS_Training_Inference](https://colab.research.google.com/drive/1VYu4kXdgpv7f742QGquA1G4ipD2Kg0kT?usp=sharing) notebook ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/ # **Citing SpeechBrain** Please cite SpeechBrain if you use it for your research or business. ```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ```
scampion/eubert_embedding_v1
scampion
"2023-12-01T16:58:26Z"
343
2
transformers
[ "transformers", "safetensors", "roberta", "feature-extraction", "license:eupl-1.1", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
"2023-11-29T17:01:38Z"
--- license: eupl-1.1 --- 👷‍♂️ Work in progress # EUBERT Embedding v1 Based on the masked language model EUBERT this sentence transformer will allow to compute embeddings on various EU documents in 24 languages. - Number of dimensions: 768 - Pre-trained model: EUBERT - Finetuned dataset: AllNLI ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('EuropeanParliament/eubert_embedding_v1') vector = model.encode("Based on the masked language model EUBERT this sentence transformer will allow to compute embeddings on various EU documents in 24 languages.") ``` Evaluation and benchmarking are welcome | task | dataset | name | config | split | revision | accuracy | ap | f1 | v_measure | |-|-|-|-|-|-|-|-|-|-| | Classification | mteb/amazon_counterfactual | MTEB AmazonCounterfactualClassification (en) | en | test | e8379541af4e31359cca9fbcf4b00f2671dba205 | 65.46268656716417 | 28.448646125211685 | 59.381505835828655 | | | Classification | mteb/amazon_polarity | MTEB AmazonPolarityClassification | default | test | e2d317d38cd51312af73b3d32a06d1a08b442046 | 66.46035 | 61.29404861567824 | 66.33660156778977 | | | Classification | mteb/amazon_reviews_multi | MTEB AmazonReviewsClassification (en) | en | test | 1399c76144fd37290681b995c656ef9b2e06e26d | 33.002 | | 32.703439998458286 | | | Clustering | mteb/arxiv-clustering-p2p | MTEB ArxivClusteringP2P | default | test | a122ad7f3f0291bf49cc6f4d32aa80929df69d5d | | | | 26.726296122407874 | | Classification | mteb/banking77 | MTEB Banking77Classification | default | test | 0fd18e25b25c072e09e0d92ab615fda904d66300 | 72.07792207792207 | | 72.00698905672714 | | | Classification | mteb/emotion | MTEB EmotionClassification | default | test | 4f58c6b202a23cf9a4da393831edf4f9183cad37 | 25.45 | | 22.489051015009604 | | | Classification | mteb/imdb | MTEB ImdbClassification | default | test | 3d86128a09e091d6018b6d26cad27f2739fc2db7 | 61.0288 | 56.84210754735158 | 60.72244426285243 | | | Classification | mteb/mtop_domain | MTEB MTOPDomainClassification (en) | en | test | d80d48c1eb48d3562165c59d59d0034df9fff0bf | 78.63657090743274 | | 77.33756273016937 | | | Classification | mteb/mtop_domain | MTEB MTOPDomainClassification (de) | de | test | d80d48c1eb48d3562165c59d59d0034df9fff0bf | 67.63313609467455 | | 65.31424834681424 | | | Classification | mteb/mtop_domain | MTEB MTOPDomainClassification (es) | es | test | d80d48c1eb48d3562165c59d59d0034df9fff0bf | 72.03468979319545 | | 70.33858350063844 | | | Classification | mteb/mtop_domain | MTEB MTOPDomainClassification (fr) | fr | test | d80d48c1eb48d3562165c59d59d0034df9fff0bf | 69.33604760413404 | | 67.2763398514464 | | | Classification | mteb/mtop_domain | MTEB MTOPDomainClassification (hi) | hi | test | d80d48c1eb48d3562165c59d59d0034df9fff0bf | 19.336679813553243 | | 17.640206592911305 | | | Classification | mteb/mtop_domain | MTEB MTOPDomainClassification (th) | th | test | d80d48c1eb48d3562165c59d59d0034df9fff0bf | 14.958408679927668 | | 12.200892995648038 | | | Classification | mteb/mtop_intent | MTEB MTOPIntentClassification (en) | en | test | ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba | 53.504331965344285 | | 37.650916452762054 | | | Classification | mteb/mtop_intent | MTEB MTOPIntentClassification (de) | de | test | ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba | 52.8007889546351 | | 35.18483837593346 | | | Classification | mteb/mtop_intent | MTEB MTOPIntentClassification (es) | es | test | ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba | 53.268845897264846 | | 37.54041476398511 | | 
| Classification | mteb/mtop_intent | MTEB MTOPIntentClassification (fr) | fr | test | ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba | 47.59160663952396 | | 33.779636915265606 | | | Classification | mteb/mtop_intent | MTEB MTOPIntentClassification (hi) | hi | test | ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba | 4.180709931875224 | | 2.240473672484894 | | | Classification | mteb/mtop_intent | MTEB MTOPIntentClassification (th) | th | test | ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba | 4.1482820976491865 | | 2.2953415174353546 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (af) | af | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 43.843308675184936 | | 42.83274171307546 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (am) | am | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 8.459986550100874 | | 8.56499841559428 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (ar) | ar | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 24.37457969065232 | | 23.648464353469087 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (az) | az | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 43.61129791526564 | | 43.02872726206446 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (bn) | bn | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 3.127101546738399 | | 1.7632874555194573 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (cy) | cy | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 39.882313382649635 | | 39.09054995553107 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (da) | da | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 49.05514458641561 | | 47.97657474719148 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (de) | de | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 47.723604572965705 | | 46.266605736862424 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (el) | el | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 49.2871553463349 | | 49.110660419740945 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (en) | en | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 54.80833893745797 | | 53.43307984316261 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (es) | es | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 48.73234700739745 | | 48.290537885757345 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (fa) | fa | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 34.60322797579018 | | 33.21866171174647 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (fi) | fi | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 47.09818426361803 | | 46.24034140543536 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (fr) | fr | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 47.92871553463349 | | 47.2879827826325 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (he) | he | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 23.429724277067923 | | 22.973698726459283 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (hi) | hi | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 3.1909885675857437 | | 2.343483452751791 | | | Classification | 
mteb/amazon_massive_intent | MTEB MassiveIntentClassification (hu) | hu | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 46.529926025554815 | | 45.585210075220026 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (hy) | hy | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 33.00605245460659 | | 32.53906554922222 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (id) | id | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 44.70073974445191 | | 44.63772874280639 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (is) | is | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 42.56556825823806 | | 42.09519069412614 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (it) | it | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 49.45191661062542 | | 49.73648735452711 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (ja) | ja | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 36.03227975790181 | | 34.81337003018146 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (jv) | jv | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 39.85205110961668 | | 39.16645932365053 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (ka) | ka | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 29.84532616005381 | | 30.048107009813975 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (km) | km | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 5.4942837928715536 | | 3.9402294020821236 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (kn) | kn | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 3.5541358439811694 | | 2.3408708229868385 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (ko) | ko | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 31.055817081371888 | | 30.54791134524761 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (lv) | lv | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 48.44989912575656 | | 47.46077758238515 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (ml) | ml | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 2.89172831203766 | | 1.1484871860887453 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (mn) | mn | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 38.924008069939475 | | 38.953938082398274 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (ms) | ms | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 43.25151311365165 | | 42.31124560201582 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (my) | my | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 3.5137861466039007 | | 1.7087643302156377 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (nb) | nb | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 45.34633490248823 | | 44.7188441016561 | | | Classification | mteb/amazon_massive_intent | MTEB MassiveIntentClassification (nl) | nl | test | 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 | 47.25285810356422 | | 45.442034061197944 | | Author : [email protected] Contributor(s): - Dominik Skotarczak (benchmark)
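Beyond the encoding snippet above, a quick way to sanity-check the embeddings is cosine similarity via the `sentence-transformers` utilities; in the sketch below the documents are invented examples, and sentences from any of the 24 supported languages can be mixed.

```python
# Sketch: compare invented multilingual example sentences with cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("EuropeanParliament/eubert_embedding_v1")

docs = [
    "The committee adopted the report on consumer protection.",
    "Der Ausschuss nahm den Bericht über den Verbraucherschutz an.",
    "The regulation enters into force twenty days after its publication.",
]
embeddings = model.encode(docs)
print(util.cos_sim(embeddings, embeddings))  # pairwise similarity matrix
```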
mradermacher/Progenitor-V5-LLaMa-70B-i1-GGUF
mradermacher
"2025-03-31T05:53:49Z"
664
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:TareksGraveyard/Progenitor-V5-LLaMa-70B", "base_model:quantized:TareksGraveyard/Progenitor-V5-LLaMa-70B", "license:llama3.3", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
"2025-02-24T07:03:31Z"
--- base_model: TareksGraveyard/Progenitor-V5-LLaMa-70B language: - en library_name: transformers license: llama3.3 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/TareksGraveyard/Progenitor-V5-LLaMa-70B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Progenitor-V5-LLaMa-70B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Progenitor-V5-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-V5-LLaMa-70B.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Progenitor-V5-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-V5-LLaMa-70B.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Progenitor-V5-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-V5-LLaMa-70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | | | [GGUF](https://huggingface.co/mradermacher/Progenitor-V5-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-V5-LLaMa-70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | | | [GGUF](https://huggingface.co/mradermacher/Progenitor-V5-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-V5-LLaMa-70B.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | | | [GGUF](https://huggingface.co/mradermacher/Progenitor-V5-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-V5-LLaMa-70B.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | | | [GGUF](https://huggingface.co/mradermacher/Progenitor-V5-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-V5-LLaMa-70B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Progenitor-V5-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-V5-LLaMa-70B.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Progenitor-V5-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-V5-LLaMa-70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Progenitor-V5-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-V5-LLaMa-70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | | | [GGUF](https://huggingface.co/mradermacher/Progenitor-V5-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-V5-LLaMa-70B.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Progenitor-V5-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-V5-LLaMa-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Progenitor-V5-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-V5-LLaMa-70B.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | | | [GGUF](https://huggingface.co/mradermacher/Progenitor-V5-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-V5-LLaMa-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Progenitor-V5-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-V5-LLaMa-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Progenitor-V5-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-V5-LLaMa-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 
38.0 | | | [GGUF](https://huggingface.co/mradermacher/Progenitor-V5-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-V5-LLaMa-70B.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Progenitor-V5-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-V5-LLaMa-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Progenitor-V5-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-V5-LLaMa-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Progenitor-V5-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-V5-LLaMa-70B.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | | | [GGUF](https://huggingface.co/mradermacher/Progenitor-V5-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-V5-LLaMa-70B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | | | [GGUF](https://huggingface.co/mradermacher/Progenitor-V5-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-V5-LLaMa-70B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | | | [PART 1](https://huggingface.co/mradermacher/Progenitor-V5-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-V5-LLaMa-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Progenitor-V5-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-V5-LLaMa-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
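The note above about concatenating multi-part files can be made concrete with a small sketch. The following Python snippet is illustrative only (it is not part of this repo): it downloads one single-file quant and reassembles the two-part i1-Q6_K listed in the table, assuming `huggingface_hub` is installed and there is enough disk space; the output file name is just an example.

```python
# Sketch: fetch one quant and reassemble the split i1-Q6_K (file names from the table above).
import shutil
from huggingface_hub import hf_hub_download

repo = "mradermacher/Progenitor-V5-LLaMa-70B-i1-GGUF"

# Single-file quant: one download is enough.
q4_path = hf_hub_download(repo, "Progenitor-V5-LLaMa-70B.i1-Q4_K_M.gguf")
print("Q4_K_M stored at:", q4_path)

# Two-part quant: download both parts, then concatenate them in order
# into one .gguf, which is the same byte-wise join that `cat` would do.
parts = [
    hf_hub_download(repo, f"Progenitor-V5-LLaMa-70B.i1-Q6_K.gguf.part{i}of2")
    for i in (1, 2)
]
with open("Progenitor-V5-LLaMa-70B.i1-Q6_K.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, merged)
```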
nasrullkun88/84d9c7d6-654e-47ee-a640-a1c893ccc7d2
nasrullkun88
"2025-04-07T09:49:53Z"
0
0
null
[ "region:us" ]
null
"2025-04-07T09:21:54Z"
d2o2ji/videomae-base-finetuned-kinetics-allkisa-crop-background-0307-clip_duration-abnormal38
d2o2ji
"2025-03-09T23:41:55Z"
0
0
transformers
[ "transformers", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base-finetuned-kinetics", "base_model:finetune:MCG-NJU/videomae-base-finetuned-kinetics", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
"2025-03-07T07:05:46Z"
--- library_name: transformers license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-base-finetuned-kinetics tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-finetuned-kinetics-allkisa-crop-background-0307-clip_duration-abnormal38 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-kinetics-allkisa-crop-background-0307-clip_duration-abnormal38 This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1985 - Accuracy: 0.9329 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.05 - training_steps: 29400 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.0024 | 0.0100 | 295 | 0.2932 | 0.8709 | | 0.0011 | 1.0100 | 590 | 0.6339 | 0.8045 | | 0.0028 | 2.0100 | 885 | 0.7747 | 0.7888 | | 0.0029 | 3.0100 | 1180 | 0.5034 | 0.8551 | | 1.0298 | 4.0100 | 1475 | 0.5480 | 0.8150 | | 0.0248 | 5.0100 | 1770 | 0.3638 | 0.8726 | | 0.0435 | 6.0100 | 2065 | 0.8338 | 0.7976 | | 0.0001 | 7.0100 | 2360 | 0.8290 | 0.7749 | | 0.0004 | 8.0100 | 2655 | 0.5562 | 0.8464 | | 0.0001 | 9.0100 | 2950 | 1.0491 | 0.8080 | | 0.0002 | 10.0100 | 3245 | 0.7868 | 0.8290 | | 0.0197 | 11.0100 | 3540 | 0.6573 | 0.8586 | | 0.0002 | 12.0100 | 3835 | 0.8348 | 0.8325 | | 0.0001 | 13.0100 | 4130 | 0.6864 | 0.8464 | | 0.0001 | 14.0100 | 4425 | 0.6752 | 0.8517 | | 0.0 | 15.0100 | 4720 | 0.8101 | 0.8464 | ### Framework versions - Transformers 4.48.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
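Since the usage sections above are still placeholders, here is a hypothetical inference sketch, not an official example from the authors: it assumes the standard VideoMAE input of 16 RGB frames and uses a random clip as a stand-in for real video data.

```python
# Hypothetical usage sketch for this fine-tuned VideoMAE checkpoint.
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

repo = "d2o2ji/videomae-base-finetuned-kinetics-allkisa-crop-background-0307-clip_duration-abnormal38"
processor = VideoMAEImageProcessor.from_pretrained(repo)
model = VideoMAEForVideoClassification.from_pretrained(repo)

# 16 RGB frames of shape (height, width, channels); replace with real frames.
clip = list(np.random.randint(0, 256, (16, 224, 224, 3), dtype=np.uint8))
inputs = processor(clip, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])
```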
mradermacher/RampedUp-Mistral-7B-v0.5-GGUF
mradermacher
"2025-01-13T07:02:21Z"
195
0
transformers
[ "transformers", "gguf", "en", "base_model:selvamathan-rampedup/RampedUp-Mistral-7B-v0.5", "base_model:quantized:selvamathan-rampedup/RampedUp-Mistral-7B-v0.5", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-01-13T06:50:56Z"
--- base_model: selvamathan-rampedup/RampedUp-Mistral-7B-v0.5 language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> static quants of https://huggingface.co/selvamathan-rampedup/RampedUp-Mistral-7B-v0.5 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/RampedUp-Mistral-7B-v0.5-GGUF/resolve/main/RampedUp-Mistral-7B-v0.5.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/RampedUp-Mistral-7B-v0.5-GGUF/resolve/main/RampedUp-Mistral-7B-v0.5.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/RampedUp-Mistral-7B-v0.5-GGUF/resolve/main/RampedUp-Mistral-7B-v0.5.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/RampedUp-Mistral-7B-v0.5-GGUF/resolve/main/RampedUp-Mistral-7B-v0.5.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/RampedUp-Mistral-7B-v0.5-GGUF/resolve/main/RampedUp-Mistral-7B-v0.5.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/RampedUp-Mistral-7B-v0.5-GGUF/resolve/main/RampedUp-Mistral-7B-v0.5.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/RampedUp-Mistral-7B-v0.5-GGUF/resolve/main/RampedUp-Mistral-7B-v0.5.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/RampedUp-Mistral-7B-v0.5-GGUF/resolve/main/RampedUp-Mistral-7B-v0.5.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/RampedUp-Mistral-7B-v0.5-GGUF/resolve/main/RampedUp-Mistral-7B-v0.5.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/RampedUp-Mistral-7B-v0.5-GGUF/resolve/main/RampedUp-Mistral-7B-v0.5.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/RampedUp-Mistral-7B-v0.5-GGUF/resolve/main/RampedUp-Mistral-7B-v0.5.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/RampedUp-Mistral-7B-v0.5-GGUF/resolve/main/RampedUp-Mistral-7B-v0.5.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
kambehmw/Pyramids1
kambehmw
"2023-04-02T14:40:17Z"
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
"2023-04-02T14:40:08Z"
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Find your model_id: kambehmw/Pyramids1 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
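If the checkpoint files are needed locally (for example before running `mlagents-learn ... --resume`), one possible way to fetch them is with `huggingface_hub`; the target directory below is only an example.

```python
# Sketch: download this repo's config and .onnx files to a local folder.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="kambehmw/Pyramids1", local_dir="./Pyramids1")
print("Files downloaded to:", local_dir)
```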
great0001/682c31ff-6056-470a-9493-75e31ddf64c8
great0001
"2025-01-22T02:31:58Z"
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:NousResearch/Llama-3.2-1B", "base_model:adapter:NousResearch/Llama-3.2-1B", "license:llama3.2", "region:us" ]
null
"2025-01-22T02:31:24Z"
--- library_name: peft license: llama3.2 base_model: NousResearch/Llama-3.2-1B tags: - axolotl - generated_from_trainer model-index: - name: 682c31ff-6056-470a-9493-75e31ddf64c8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/Llama-3.2-1B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - f125140bbd8d00bb_train_data.json ds_type: json format: custom path: /workspace/input_data/f125140bbd8d00bb_train_data.json type: field_instruction: prompt field_output: chosen format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: great0001/682c31ff-6056-470a-9493-75e31ddf64c8 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 10 micro_batch_size: 2 mlflow_experiment_name: /tmp/f125140bbd8d00bb_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: <|end_of_text|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 6fae5722-a687-4f33-91b5-eed8fce0140c wandb_project: Birthday-SN56-14-Gradients-On-Demand wandb_run: your_name wandb_runid: 6fae5722-a687-4f33-91b5-eed8fce0140c warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 682c31ff-6056-470a-9493-75e31ddf64c8 This model is a fine-tuned version of [NousResearch/Llama-3.2-1B](https://huggingface.co/NousResearch/Llama-3.2-1B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.4017 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.1415 | 0.0018 | 1 | 2.1399 | | 1.8467 | 0.0053 | 3 | 2.1261 | | 2.04 | 0.0106 | 6 | 1.9215 | | 1.4872 | 0.0160 | 9 | 1.4017 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
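Because this repo stores a LoRA adapter rather than full weights, a minimal loading sketch (illustrative, not taken from the card) attaches the adapter to the base model named in the config above; the prompt and generation settings are placeholders.

```python
# Sketch: load the base model, then attach this LoRA adapter with PEFT.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Llama-3.2-1B"
adapter_id = "great0001/682c31ff-6056-470a-9493-75e31ddf64c8"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```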
TCLResearchEurope/resnet50d_pruned_37
TCLResearchEurope
"2023-12-07T16:38:35Z"
0
0
null
[ "pytorch", "torch-dag", "image-classification", "dataset:imagenet-1k", "license:apache-2.0", "region:us" ]
image-classification
"2023-12-07T16:37:40Z"
--- license: apache-2.0 datasets: - imagenet-1k metrics: - accuracy pipeline_tag: image-classification tags: - pytorch - torch-dag --- # Model Card for resnet50d_pruned_37 This is a pruned version of the [timm/resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k) model in a [torch-dag](https://github.com/TCLResearchEurope/torch-dag) format. This model has roughly 37% of the original model FLOPs with minimal metrics drop. | Model | KMAPPs* | M Parameters | Accuracy (224x224) | | ----------- | ----------- | ----------- | ------------------ | | **timm/resnet50d.a3_in1k (baseline)** | 174 | 25.6 | 80.9% | | **resnet50d_pruned_37 (ours)** | 63.7 **(37%)** | 10.9 **(43%)** | 78.51% **(↓ 2.39%)** | \***KMAPPs** thousands of FLOPs per input pixel `KMAPPs(model) = FLOPs(model) / (H * W * 1000)`, where `(H, W)` is the input resolution. The accuracy was calculated on the ImageNet-1k validation dataset. For details about image pre-processing, please refer to the original repository. ## Model Details ### Model Description - **Developed by:** [TCL Research Europe](https://github.com/TCLResearchEurope/) - **Model type:** Classification / feature backbone - **License:** Apache 2.0 - **Finetuned from model:** [timm/resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k) ### Model Sources - **Repository:** [timm/resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k) ## How to Get Started with the Model To load the model, you have to install the [torch-dag](https://github.com/TCLResearchEurope/torch-dag#3-installation) library, which can be done using `pip` by ``` pip install torch-dag ``` then, clone this repository ``` # Make sure you have git-lfs installed (https://git-lfs.com) git lfs install git clone https://huggingface.co/TCLResearchEurope/resnet50d_pruned_37 ``` and now you are ready to load the model: ``` import torch_dag import torch model = torch_dag.io.load_dag_from_path('./resnet50d_pruned_37') model.eval() out = model(torch.ones(1, 3, 224, 224)) print(out.shape) ```
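As a quick sanity check of the KMAPPs definition above, the table values can be converted back to absolute FLOPs at the 224x224 evaluation resolution; the numbers below are simply the table entries plugged into the stated formula.

```python
# Worked example of KMAPPs(model) = FLOPs(model) / (H * W * 1000).
H = W = 224

def kmapps_to_flops(kmapps: float, h: int = H, w: int = W) -> float:
    return kmapps * 1000 * h * w

baseline = kmapps_to_flops(174)   # timm/resnet50d.a3_in1k
pruned = kmapps_to_flops(63.7)    # resnet50d_pruned_37

print(f"baseline ~ {baseline / 1e9:.1f} GFLOPs")  # ~8.7 GFLOPs
print(f"pruned   ~ {pruned / 1e9:.1f} GFLOPs")    # ~3.2 GFLOPs
print(f"ratio    ~ {pruned / baseline:.0%}")      # ~37%, matching the table
```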
NimaBoscarino/ventricular_short_axis_3label
NimaBoscarino
"2022-07-27T06:27:38Z"
0
0
null
[ "MONAI", "region:us" ]
null
"2022-07-27T06:26:22Z"
--- tags: - MONAI --- # 3 Label Ventricular Segmentation This network segments the cardiac ventricles in 2D short axis MR images. The left ventricular pool is class 1, left ventricular myocardium class 2, and right ventricular pool class 3. Full cycle segmentation with this network is possible although much of the training data is composed of segmented end-diastole images. The input to the network is a single 2D image, thus segmenting whole time-dependent volumes consists of multiple inference operations. The network and training scheme are essentially identical to those described in: `Kerfoot E., Clough J., Oksuz I., Lee J., King A.P., Schnabel J.A. (2019) Left-Ventricle Quantification Using Residual U-Net. In: Pop M. et al. (eds) Statistical Atlases and Computational Models of the Heart. Atrial Segmentation and LV Quantification Challenges. STACOM 2018. Lecture Notes in Computer Science, vol 11395. Springer, Cham. https://doi.org/10.1007/978-3-030-12029-0_40` ## Data The dataset used to train this network unfortunately cannot be made public as it contains unreleased image data from King's College London. Existing public datasets such as the [Sunnybrook Cardiac Dataset](http://www.cardiacatlas.org/studies/sunnybrook-cardiac-data/) and the [ACDC Challenge](https://www.creatis.insa-lyon.fr/Challenge/acdc/) set can be used to train a similar network. The `train.json` configuration assumes all data is stored in a single npz file with keys "images" and "segs" containing respectively the raw image data and their accompanying segmentations. The given network was trained with stored volumes with shapes `(9095, 256, 256)`, thus other data of differing spatial dimensions must be cropped to `(256, 256)` or zero-padded to that size. For the training data this was done as a preprocessing step but the original pixel values are otherwise unchanged from their original forms. ## Training The network is trained with this data in conjunction with a series of augmentations for regularisation and robustness. Many of the original images are smaller than the expected size of `(256, 256)` and so were zero-padded; the network can thus be expected to be robust against large amounts of empty space in the inputs. Rotation and zooming are also applied to force the network to learn different sizes and orientations of the heart in the field of view. Free-form deformation is applied to vary the shape of the heart and its surrounding tissues, which mimics to a degree the deformation observed through the cardiac cycle. This of course does not replicate the heart moving through plane during the cycle or represent other observed changes, but does provide enough variation that full-cycle segmentation is generally acceptable. Smooth fields are used to vary contrast and intensity in localised regions to simulate some of the variation in image quality caused by acquisition artefacts. Gaussian noise is also added to simulate poor quality acquisition. These together force the network to learn to deal with a wider variation of image quality and partially to account for the difference between scanner vendors. Training is invoked with the following command line: ```sh python -m monai.bundle run training --meta_file configs/metadata.json --config_file configs/train.json --logging_file configs/logging.conf --bundle_root . ``` The dataset file is assumed to be `allimages3label.npz` but can be changed by setting the `dataset_file` value to your own file.
## Inference An example notebook [visualise.ipynb](./visualise.ipynb) demonstrates using the network directly with input images. Inference of 3D volumes only can be accomplished with the `inference.json` configuration: ```sh python -m monai.bundle run evaluating --meta_file configs/metadata.json --config_file configs/inference.json --logging_file configs/logging.conf --dataset_dir dataset --output_dir ./output/ --bundle_root . ```
jienengchen/ViTamin-B
jienengchen
"2024-04-08T05:44:22Z"
102
0
transformers
[ "transformers", "pytorch", "feature-extraction", "custom_code", "dataset:mlfoundations/datacomp_1b", "arxiv:2404.02132", "license:mit", "region:us" ]
feature-extraction
"2024-04-08T04:25:05Z"
--- license: mit datasets: - mlfoundations/datacomp_1b pipeline_tag: feature-extraction --- # Model card for ViTamin-B Official huggingface models of **ViTamin**, from the following CVPR 2024 paper: [ViTamin: Design Scalable Vision Models in the Vision-language Era](https://arxiv.org/pdf/2404.02132.pdf).\ ✨ &ensp;[Jieneng Chen](https://beckschen.github.io), [Qihang Yu](https://yucornetto.github.io/), [Xiaohui Shen](https://xiaohuishen.github.io/), [Alan Yuille](https://www.cs.jhu.edu/~ayuille/) and [Liang-Chieh Chen](http://liangchiehchen.com/)\ 🏠 &ensp;Johns Hopkins University, Bytedance Load from HuggingFace with transformers.AutoModel: ```python import torch import open_clip from PIL import Image from transformers import AutoModel, CLIPImageProcessor device = "cuda" if torch.cuda.is_available() else "cpu" # obtained 69.1% zero-shot ImageNet score model = AutoModel.from_pretrained( 'jienengchen/ViTamin-B', trust_remote_code=True).to(device).eval() image = Image.open('./image.png').convert('RGB') image_processor = CLIPImageProcessor.from_pretrained('jienengchen/ViTamin-B') pixel_values = image_processor(images=image, return_tensors='pt').pixel_values pixel_values = pixel_values.to(torch.bfloat16).cuda() tokenizer = open_clip.get_tokenizer('hf-hub:laion/CLIP-ViT-L-14-DataComp.XL-s13B-b90K') text = tokenizer(["a photo of vitamin", "a dog", "a cat"]).to(device) with torch.no_grad(), torch.cuda.amp.autocast(): image_features, text_features, logit_scale = model(pixel_values, text) text_probs = (100.0 * image_features @ text_features.to(torch.float).T).softmax(dim=-1) print("Label probs:", text_probs) ``` ## Main Results with CLIP Pre-training on DataComp-1B | image encoder | image size | num patches | text encoder depth/width | seen samples (B) | trainable params Image+Text (M) | MACs Image+Text (G) | ImageNet Acc. | avg. 38 datasets | ImageNet dist. shift. 
| VTAB | retrieval | |---------------|------------|-------------|--------------------------|-------------------|---------------------------------|----------------------|---------------|------------------|-----------------------|------|-----------| | ViTamin-S | 224 | 196 | 12/384 | 1.28 | 22.0+40.4 | 5.50+1.64 | 62.2 | 53.2 | 51.3 | 51.7 | 50.0 | | ViTamin-S-LTT | 224 | 196 | 12/384 | 1.28 | 22.0+40.4 | 5.50+1.64 | 63.4 | 54.6 | 51.6 | 54.9 | 52.9 | | ViTamin-B | 224 | 196 | 12/512 | 1.28 | 87.5+63.4 | 21.8+2.9 | 68.9 | 57.7 | 58.3 | 56.4 | 54.1 | | ViTamin-B-LTT | 224 | 196 | 12/512 | 1.28 | 87.5+63.4 | 21.8+2.9 | 70.8 | 59.4 | 59.3 | 56.6 | 59.4 | | ViTamin-L | 224 | 196 | 12/768 | 12.8 | 333.3+123.7 | 72.6+6.6 | 80.8 | 66.7 | 69.8 | 65.3 | 60.3 | | ViTamin-L | 256 | 256 | 12/768 | 12.8+0.2 | 333.4+123.7 | 94.8+6.6 | 81.2 | 67.0 | 71.1 | 65.3 | 61.2 | | ViTamin-L | 336 | 441 | 12/768 | 12.8+0.2 | 333.6+123.7 | 163.4+6.6 | 81.6 | 67.0 | 72.1 | 64.4 | 61.6 | | ViTamin-L | 384 | 576 | 12/768 | 12.8+0.2 | 333.7+123.7 | 213.4+6.6 | 81.8 | 67.2 | 72.4 | 64.7 | 61.8 | | ViTamin-L2 | 224 | 196 | 24/1024 | 12.8 | 333.6+354.0 | 72.6+23.3 | 80.9 | 66.4 | 70.6 | 63.4 | 61.5 | | ViTamin-L2 | 256 | 256 | 24/1024 | 12.8+0.5 | 333.6+354.0 | 94.8+23.3 | 81.5 | 67.4 | 71.9 | 64.1 | 63.1 | | ViTamin-L2 | 336 | 441 | 24/1024 | 12.8+0.5 | 333.8+354.0 | 163.4+23.3 | 81.8 | 67.8 | 73.0 | 64.5 | 63.6 | | ViTamin-L2 | 384 | 576 | 24/1024 | 12.8+0.5 | 334.0+354.0 | 213.4+23.3 | 82.1 | 68.1 | 73.4 | 64.8 | 63.7 | | ViTamin-XL | 256 | 256 | 27/1152 | 12.8+0.5 | 436.1+488.7 | 125.3+33.1 | 82.1 | 67.6 | 72.3 | 65.4 | 62.7 | | ViTamin-XL | 384 | 576 | 27/1152 | 12.8+0.5 | 436.1+488.7 | 281.9+33.1 | 82.6 | 68.1 | 73.6 | 65.6 | 63.8 | | ViTamin-XL | 256 | 256 | 27/1152 | 40 | 436.1+488.7 | 125.3+33.1 | 82.3 | 67.5 | 72.8 | 64.0 | 62.1 | | ViTamin-XL | 336 | 441 | 27/1152 | 40+1 | 436.1+488.7 | 215.9+33.1 | 82.7 | 68.0 | 73.9 | 64.1 | 62.6 | | ViTamin-XL | 384 | 576 | 27/1152 | 40+1 | 436.1+488.7 | 281.9+33.1 | 82.9 | 68.1 | 74.1 | 64.0 | 62.5 | ## Main Results on Downstream tasks **Open-Vocab Detection** | image encoder | detector | OV-COCO (AP<sub>50</sub><sup>novel</sup>) | OV-LVIS (AP<sub>r</sub>) | |---------------|----------|---------------------------------------|-----------------------| | ViT-L/14 | Sliding F-ViT | 36.1 | 32.5 | | ViTamin-L | Sliding F-ViT | 37.5 | 35.6 | **Open-Vocab Segmentation** | image encoder | segmentor | ADE | Cityscapes | MV | A-150 | A-847 | PC-459 | PC-59 | PAS-21 | |---------------|-------------|----------------|--------------|------|-------|-------|--------|-------|--------------------| | ViT-L/14 | Sliding FC-CLIP | 24.6 | 40.7 | 16.5 | 31.8 | 14.3 | 18.3 | 55.1 | 81.5 | | ViTamin-L | Sliding FC-CLIP | 27.3 | 44.0 | 18.2 | 35.6 | 16.1 | 20.4 | 58.4 | 83.4 | Note: Panoptic dataset (ADE, CityScapes, MV) are with the metric of PQ. Semantic dataset (A-150, A-847, PC-459, PC-59, PAS-21) are with the metric of mIoU. 
**Large Multi-modal Models** | image encoder | image size | VQAv2 | GQA | VizWiz | SQA | T-VQA | POPE | MME | MM-Bench | MM-B-CN | SEED | LLaVA-Wild | MM-Vet | |---------------|----------|-------|------|--------|------|-------|------|------|----------|---------|------|------------|--------| | ViTamin-L | 336 | 78.4 | 61.6 | 51.1 | 66.9 | 58.7 | 84.6 | 1421 | 65.4 | 58.4 | 57.7 | 64.5 | 33.6 | | ViTamin-L | 384 | 78.9 | 61.6 | 55.4 | 67.6 | 59.8 | 85.5 | 1447 | 64.5 | 58.3 | 57.9 | 66.1 | 33.6 | ## Citing ViTamin ``` @inproceedings{chen2024vitamin, title={ViTamin: Design Scalable Vision Models in the Vision-language Era}, author={Chen, Jieneng and Yu, Qihang and Shen, Xiaohui and Yuille, ALan and Chen, Liang-Chieh}, booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, year={2024} } ```
bbytxt/eb0fe442-1efd-491d-aa2c-d0ac309c7b61
bbytxt
"2025-01-11T17:00:55Z"
7
0
peft
[ "peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:databricks/dolly-v2-3b", "base_model:adapter:databricks/dolly-v2-3b", "license:mit", "region:us" ]
null
"2025-01-11T16:53:48Z"
--- library_name: peft license: mit base_model: databricks/dolly-v2-3b tags: - axolotl - generated_from_trainer model-index: - name: eb0fe442-1efd-491d-aa2c-d0ac309c7b61 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: databricks/dolly-v2-3b bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 0c87c12065f9eb25_train_data.json ds_type: json format: custom path: /workspace/input_data/0c87c12065f9eb25_train_data.json type: field_instruction: human field_output: chosen format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: null eval_batch_size: 2 eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: 4 flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: true hub_model_id: bbytxt/eb0fe442-1efd-491d-aa2c-d0ac309c7b61 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 5 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/0c87c12065f9eb25_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null saves_per_epoch: null sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 2784ff78-52a4-4959-b96b-47b4fb1ea4c6 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 2784ff78-52a4-4959-b96b-47b4fb1ea4c6 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # eb0fe442-1efd-491d-aa2c-d0ac309c7b61 This model is a fine-tuned version of [databricks/dolly-v2-3b](https://huggingface.co/databricks/dolly-v2-3b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.2296 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0004 | 1 | 2.2573 | | 2.9125 | 0.0222 | 50 | 1.3195 | | 2.7019 | 0.0444 | 100 | 1.2728 | | 2.7 | 0.0667 | 150 | 1.2384 | | 2.7471 | 0.0889 | 200 | 1.2296 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
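For deployment without the PEFT runtime, one option (an illustrative sketch, not part of the original card) is to merge this LoRA adapter into the dolly-v2-3b base weights and save the result; the output directory is an example.

```python
# Sketch: fold the LoRA deltas into the base weights, then save a standalone model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "databricks/dolly-v2-3b"
adapter_id = "bbytxt/eb0fe442-1efd-491d-aa2c-d0ac309c7b61"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)

merged = model.merge_and_unload()               # merges LoRA weights into the base model
merged.save_pretrained("./dolly-v2-3b-merged")  # example output path
tokenizer.save_pretrained("./dolly-v2-3b-merged")
```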
TheBloke/Yi-34B-GiftedConvo-merged-GGUF
TheBloke
"2023-11-09T11:46:06Z"
97
8
transformers
[ "transformers", "gguf", "llama", "dataset:NobodyExistsOnTheInternet/GiftedConvoBeforeEcons", "base_model:NobodyExistsOnTheInternet/Yi-34B-GiftedConvo-merged", "base_model:quantized:NobodyExistsOnTheInternet/Yi-34B-GiftedConvo-merged", "license:mit", "region:us" ]
null
"2023-11-08T23:49:35Z"
--- base_model: NobodyExistsOnTheInternet/Yi-34B-GiftedConvo-merged datasets: - NobodyExistsOnTheInternet/GiftedConvoBeforeEcons inference: false license: mit model_creator: Nobody.png model_name: Yi 34B GiftedConvo Llama model_type: llama prompt_template: 'USER: {prompt} ASSISTANT: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Yi 34B GiftedConvo Llama - GGUF - Model creator: [Nobody.png](https://huggingface.co/NobodyExistsOnTheInternet) - Original model: [Yi 34B GiftedConvo Llama](https://huggingface.co/NobodyExistsOnTheInternet/Yi-34B-GiftedConvo-merged) <!-- description start --> ## Description This repo contains GGUF format model files for [Nobody.png's Yi 34B GiftedConvo Llama](https://huggingface.co/NobodyExistsOnTheInternet/Yi-34B-GiftedConvo-merged). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. 
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GGUF) * [Nobody.png's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NobodyExistsOnTheInternet/Yi-34B-GiftedConvo-merged) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: User-Assistant ``` USER: {prompt} ASSISTANT: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `mit`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Nobody.png's Yi 34B GiftedConvo Llama](https://huggingface.co/NobodyExistsOnTheInternet/Yi-34B-GiftedConvo-merged). <!-- licensing end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [yi-34b-giftedconvo-merged.Q2_K.gguf](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GGUF/blob/main/yi-34b-giftedconvo-merged.Q2_K.gguf) | Q2_K | 2 | 14.56 GB| 17.06 GB | smallest, significant quality loss - not recommended for most purposes | | [yi-34b-giftedconvo-merged.Q3_K_S.gguf](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GGUF/blob/main/yi-34b-giftedconvo-merged.Q3_K_S.gguf) | Q3_K_S | 3 | 14.96 GB| 17.46 GB | very small, high quality loss | | [yi-34b-giftedconvo-merged.Q3_K_M.gguf](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GGUF/blob/main/yi-34b-giftedconvo-merged.Q3_K_M.gguf) | Q3_K_M | 3 | 16.64 GB| 19.14 GB | very small, high quality loss | | [yi-34b-giftedconvo-merged.Q3_K_L.gguf](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GGUF/blob/main/yi-34b-giftedconvo-merged.Q3_K_L.gguf) | Q3_K_L | 3 | 18.14 GB| 20.64 GB | small, substantial quality loss | | [yi-34b-giftedconvo-merged.Q4_0.gguf](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GGUF/blob/main/yi-34b-giftedconvo-merged.Q4_0.gguf) | Q4_0 | 4 | 19.47 GB| 21.97 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [yi-34b-giftedconvo-merged.Q4_K_S.gguf](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GGUF/blob/main/yi-34b-giftedconvo-merged.Q4_K_S.gguf) | Q4_K_S | 4 | 19.54 GB| 22.04 GB | small, greater quality loss | | [yi-34b-giftedconvo-merged.Q4_K_M.gguf](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GGUF/blob/main/yi-34b-giftedconvo-merged.Q4_K_M.gguf) | Q4_K_M | 4 | 20.66 GB| 23.16 GB | medium, balanced quality - recommended | | [yi-34b-giftedconvo-merged.Q5_0.gguf](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GGUF/blob/main/yi-34b-giftedconvo-merged.Q5_0.gguf) | Q5_0 | 5 | 23.71 GB| 26.21 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [yi-34b-giftedconvo-merged.Q5_K_S.gguf](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GGUF/blob/main/yi-34b-giftedconvo-merged.Q5_K_S.gguf) | Q5_K_S | 5 | 23.71 GB| 26.21 GB | large, low quality loss - recommended | | [yi-34b-giftedconvo-merged.Q5_K_M.gguf](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GGUF/blob/main/yi-34b-giftedconvo-merged.Q5_K_M.gguf) | Q5_K_M | 5 | 24.32 GB| 26.82 GB | large, very low quality loss - recommended | | [yi-34b-giftedconvo-merged.Q6_K.gguf](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GGUF/blob/main/yi-34b-giftedconvo-merged.Q6_K.gguf) | Q6_K | 6 | 28.21 GB| 30.71 GB | very large, extremely low quality loss | | [yi-34b-giftedconvo-merged.Q8_0.gguf](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GGUF/blob/main/yi-34b-giftedconvo-merged.Q8_0.gguf) | Q8_0 | 8 | 36.54 GB| 39.04 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Yi-34B-GiftedConvo-merged-GGUF and below it, a specific filename to download, such as: yi-34b-giftedconvo-merged.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Yi-34B-GiftedConvo-merged-GGUF yi-34b-giftedconvo-merged.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Yi-34B-GiftedConvo-merged-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Yi-34B-GiftedConvo-merged-GGUF yi-34b-giftedconvo-merged.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m yi-34b-giftedconvo-merged.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "USER: {prompt}\nASSISTANT:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Yi-34B-GiftedConvo-merged-GGUF", model_file="yi-34b-giftedconvo-merged.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. 
Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Nobody.png's Yi 34B GiftedConvo Llama Trained on over 20k instruct generated all by gpt-4 or humans Dataset features: 1000 long evolved conversations based off LIMA Subsection of correct PRM800k data Subsection of CamelAI's Physics and Chemistry data The model is trained with Qlora as well as Axolotl. <!-- original-model-card end -->
KiViDrag/ViT_breastmnist_std_30
KiViDrag
"2024-11-09T15:35:41Z"
215
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:medmnist-v2", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-11-09T15:35:29Z"
--- library_name: transformers license: apache-2.0 base_model: google/vit-base-patch16-224 tags: - generated_from_trainer datasets: - medmnist-v2 metrics: - accuracy - f1 model-index: - name: ViT_breastmnist_std_30 results: - task: name: Image Classification type: image-classification dataset: name: medmnist-v2 type: medmnist-v2 config: breastmnist split: validation args: breastmnist metrics: - name: Accuracy type: accuracy value: 0.8269230769230769 - name: F1 type: f1 value: 0.7314974182444062 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_breastmnist_std_30 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the medmnist-v2 dataset. It achieves the following results on the evaluation set: - Loss: 0.3936 - Accuracy: 0.8269 - F1: 0.7315 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:------:|:----:|:---------------:|:--------:|:------:| | 0.5034 | 0.2597 | 20 | 0.4719 | 0.7436 | 0.4708 | | 0.4414 | 0.5195 | 40 | 0.4457 | 0.7821 | 0.6400 | | 0.3762 | 0.7792 | 60 | 0.4212 | 0.8205 | 0.7248 | | 0.4059 | 1.0390 | 80 | 0.3988 | 0.8462 | 0.7641 | | 0.3249 | 1.2987 | 100 | 0.3829 | 0.8333 | 0.7606 | | 0.2991 | 1.5584 | 120 | 0.4080 | 0.8462 | 0.7743 | | 0.2948 | 1.8182 | 140 | 0.3932 | 0.8462 | 0.7833 | | 0.2667 | 2.0779 | 160 | 0.4388 | 0.8333 | 0.7502 | | 0.2049 | 2.3377 | 180 | 0.4047 | 0.8333 | 0.7606 | | 0.1639 | 2.5974 | 200 | 0.4301 | 0.8333 | 0.7502 | | 0.1732 | 2.8571 | 220 | 0.4028 | 0.8333 | 0.7606 | | 0.1138 | 3.1169 | 240 | 0.3755 | 0.8718 | 0.8194 | | 0.1099 | 3.3766 | 260 | 0.4019 | 0.8590 | 0.7886 | | 0.1285 | 3.6364 | 280 | 0.3739 | 0.8590 | 0.7974 | | 0.1265 | 3.8961 | 300 | 0.3714 | 0.8590 | 0.8051 | | 0.0735 | 4.1558 | 320 | 0.3820 | 0.8718 | 0.8194 | | 0.0515 | 4.4156 | 340 | 0.3910 | 0.8462 | 0.7833 | | 0.0577 | 4.6753 | 360 | 0.3984 | 0.8462 | 0.7833 | | 0.0584 | 4.9351 | 380 | 0.4314 | 0.8590 | 0.7974 | | 0.0241 | 5.1948 | 400 | 0.4040 | 0.8718 | 0.8194 | | 0.015 | 5.4545 | 420 | 0.4201 | 0.8718 | 0.8194 | | 0.023 | 5.7143 | 440 | 0.4276 | 0.8718 | 0.8194 | | 0.0254 | 5.9740 | 460 | 0.4271 | 0.8846 | 0.8342 | | 0.0086 | 6.2338 | 480 | 0.4149 | 0.8718 | 0.8194 | | 0.012 | 6.4935 | 500 | 0.4738 | 0.8718 | 0.8120 | | 0.0052 | 6.7532 | 520 | 0.4314 | 0.8846 | 0.8342 | | 0.0123 | 7.0130 | 540 | 0.4363 | 0.8718 | 0.8194 | | 0.0026 | 7.2727 | 560 | 0.4477 | 0.8846 | 0.8342 | | 0.0018 | 7.5325 | 580 | 0.4447 | 0.8718 | 0.8194 | | 0.0024 | 7.7922 | 600 | 0.4588 | 0.8718 | 0.8194 | | 0.0076 | 8.0519 | 620 | 0.4517 | 0.8718 | 0.8194 | | 0.0013 | 8.3117 | 640 | 0.4535 | 0.8718 | 0.8194 | | 0.0012 | 8.5714 | 660 | 0.4479 | 0.8846 | 0.8342 | | 0.001 | 8.8312 | 680 | 0.4477 | 0.8846 | 0.8342 | | 0.0015 | 9.0909 | 700 | 0.4509 | 0.8846 | 0.8342 | | 0.001 | 9.3506 | 720 | 0.4529 | 0.8846 | 0.8342 | | 0.0009 | 9.6104 | 740 | 0.4569 | 0.8846 | 0.8342 | | 0.001 | 9.8701 | 760 | 
0.4563 | 0.8846 | 0.8342 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
m3rg-iitd/matscibert
m3rg-iitd
"2024-06-22T12:00:54Z"
3,374
17
transformers
[ "transformers", "pytorch", "safetensors", "bert", "fill-mask", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:05Z"
--- license: mit language: - en metrics: - accuracy - f1 - precision - recall library_name: transformers --- # MatSciBERT ## A Materials Domain Language Model for Text Mining and Information Extraction This is the pretrained model presented in [MatSciBERT: A materials domain language model for text mining and information extraction](https://rdcu.be/cMAp5), which is a BERT model trained on materials science research papers. The training corpus comprises papers related to the broad category of materials: alloys, glasses, metallic glasses, cement and concrete. We have utilised the abstracts and full text of papers (when available). All the research papers have been downloaded from [ScienceDirect](https://www.sciencedirect.com/) using the [Elsevier API](https://dev.elsevier.com/). The detailed methodology is given in the paper. The code for pretraining and finetuning on downstream tasks is shared on [GitHub](https://github.com/m3rg-repo/MatSciBERT). If you find this useful in your research, please consider citing: ``` @article{gupta_matscibert_2022, title = "{MatSciBERT}: A Materials Domain Language Model for Text Mining and Information Extraction", author = "Gupta, Tanishq and Zaki, Mohd and Krishnan, N. M. Anoop and Mausam", year = "2022", month = may, journal = "npj Computational Materials", volume = "8", number = "1", pages = "102", issn = "2057-3960", url = "https://www.nature.com/articles/s41524-022-00784-w", doi = "10.1038/s41524-022-00784-w" } ```
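For quick experimentation, a minimal fill-mask sketch with 🤗 Transformers is shown below; the example sentence is purely illustrative.

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

# Load MatSciBERT from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("m3rg-iitd/matscibert")
model = AutoModelForMaskedLM.from_pretrained("m3rg-iitd/matscibert")

# Predict the masked token in a materials-science sentence
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
for prediction in fill_mask("Silica is the main component of [MASK] glass."):
    print(prediction["token_str"], prediction["score"])
```

For downstream fine-tuning as evaluated in the paper, see the scripts in the linked GitHub repository.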
Bunpot/llama3.2-3B-finetune-spa-onsen-v5
Bunpot
"2025-03-07T09:33:17Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-03-07T09:33:10Z"
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Bunpot - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
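A minimal loading sketch with Unsloth is shown below; it assumes the uploaded weights load directly by repo id and that a 4-bit setup similar to the base checkpoint is acceptable (sequence length and prompt are illustrative).

```python
from unsloth import FastLanguageModel

# Load the fine-tuned model (4-bit, mirroring the unsloth base checkpoint)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Bunpot/llama3.2-3B-finetune-spa-onsen-v5",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to faster inference mode

inputs = tokenizer("Recommend a relaxing onsen itinerary.", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```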
DavidAU/Meta-Llama-3.1-Instruct-8.9B-BRAINSTORM-5x-FORM-11-GGUF
DavidAU
"2024-11-14T05:50:35Z"
55
3
null
[ "gguf", "creative", "story", "writing", "fiction", "roleplaying", "rp", "horror", "science fiction", "fiction writing", "scene generation", "scene continue", "brainstorm 5x", "enhanced", "llama3", "llama3.1", "text-generation", "en", "arxiv:2401.02415", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
"2024-07-31T07:04:54Z"
--- license: apache-2.0 language: - en tags: - creative - story - writing - fiction - roleplaying - rp - horror - science fiction - fiction writing - scene generation - scene continue - brainstorm 5x - enhanced - llama3 - llama3.1 pipeline_tag: text-generation --- <H3>BRAINSTORM 5x: Meta-Llama-3.1-8B Instruct (now at 8.9B), Formula #11 </H3> This repo contains 5x quants of Meta-Llama-3.1-8B Instruct (now at 8.9B) with a max context of 131k, using the "Brainstorm" method of augmenting reasoning in an LLM to increase its performance at the core level for ANY creative use case(s). This specific version has calibrations that allow it to exceed the creative levels of the original model. Examples of output are below. The BRAINSTORM process was developed by David_AU. Some of the core principles behind this process are discussed in this <a href="https://arxiv.org/pdf/2401.02415"> scientific paper: Progressive LLaMA with Block Expansion </a>. However, I went in a completely different direction from what was outlined in this paper. <B>What is "Brainstorm"?</b> The reasoning center of an LLM is taken apart, reassembled, and expanded. Then these centers are individually calibrated. These "centers" also interact with each other. This introduces subtle changes into the reasoning process. The calibrations further adjust - dial up or down - these "changes". The number of centers (5x, 10x, etc.) allows more "tuning points" to further customize how the model reasons, so to speak. The core aim of this process is to increase the model's detail, concept and connection to the "world", general concept connections, prose quality and prose length without affecting instruction following. This will also enhance any creative use case(s) of any kind, including "brainstorming", creative art form(s) and similar use cases. Here are some of the enhancements this process brings to the model's performance: - Prose generation seems more focused on the moment to moment. - Sometimes there will be "preamble" and/or foreshadowing present. - Fewer or no "cliches". - Better overall prose and/or more complex / nuanced prose. - A greater sense of nuance on all levels. - Coherence is stronger. - Description is more detailed, and connected closer to the content. - Similes and metaphors are stronger and better connected to the prose, story, and character. - Sense of "there" / in the moment is enhanced. - Details are more vivid, and there are more of them. - Prose generation length can be long to extreme. - Emotional engagement is stronger. - The model will take FEWER liberties vs a normal model: it will follow directives more closely but will "guess" less. - The MORE instructions and/or details you provide, the more strongly the model will respond. - Depending on the model, the "voice" may be more "human" vs the original model's "voice". Other "lab" observations: - This process does not, in my opinion, make the model 5x or 10x "smarter" - if only that were true! - However, a change in "IQ" was not an issue / a priority, and was not tested or calibrated for, so to speak. - From lab testing it seems to ponder and consider more carefully, roughly speaking. - You could say this process sharpens the model's focus on its task(s) at a deeper level. The process to modify the model occurs at the root level - the source files level. The model can be quanted as a GGUF, EXL2, AWQ, etc.
Other technologies developed by David_AU like "Ultra" (precision), "Neo Imatrix" (custom imatrix datasets), and "X-quants" (custom application of the imatrix process) can further enhance the performance of the model along with the "Brainstorm" process. The "Brainstorm" process has been tested on multiple LLama2, Llama3, and Mistral models of various parameter sizes, as well as on "root" models like "Llama3 Instruct", "Mistral Instruct", and "merged" / "fine tuned" models too. For original model specifications, usage information and other important details: Special thanks to the model creators at META for making such a fantastic model: [ https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct ] Please report any issue(s) and/or feedback via the "Community tab". This is a LLAMA3 model, and requires Llama3 template, but may work with other template(s) and has maximum context of 131k. Here is the standard LLAMA3 template: <PRE> { "name": "Llama 3", "inference_params": { "input_prefix": "<|start_header_id|>user<|end_header_id|>\n\n", "input_suffix": "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n", "pre_prompt": "You are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.", "pre_prompt_prefix": "<|start_header_id|>system<|end_header_id|>\n\n", "pre_prompt_suffix": "<|eot_id|>", "antiprompt": [ "<|start_header_id|>", "<|eot_id|>" ] } } </PRE> <b>Optional Enhancement:</B> The following can be used in place of the "system prompt" or "system role" to further enhance the model. It can also be used at the START of a NEW chat, but you must make sure it is "kept" as the chat moves along. In this case the enhancements do not have as strong effect at using "system prompt" or "system role". Copy and paste EXACTLY as noted, DO NOT line wrap or break the lines, maintain the carriage returns exactly as presented. <PRE> Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities. Here are your skillsets: [MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv) [*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision) Here are your critical instructions: Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story. </PRE> You do not need to use this, it is only presented as an additional enhancement which seems to help scene generation and scene continue functions. This enhancement WAS NOT used to generate the examples below. 
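As a rough usage sketch (not from the original card), one way to run a quant from this repo locally is with llama-cpp-python. The .gguf file name below is hypothetical and should be replaced with an actual quant file from this repo; older llama-cpp-python versions may also need chat_format="llama-3" passed explicitly.

```python
from llama_cpp import Llama

# Hypothetical file name -- substitute a real .gguf quant downloaded from this repo
llm = Llama(
    model_path="Meta-Llama-3.1-Instruct-8.9B-BRAINSTORM-5x-FORM-11-Q4_K_M.gguf",
    n_ctx=8192,
)

# create_chat_completion formats the messages with the Llama3 chat template
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful, smart, kind, and efficient AI assistant."},
        {"role": "user", "content": "Start a vivid horror scene in an abandoned lighthouse."},
    ],
    temperature=0.0,   # the example generations in this card used temp=0
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```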
<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B> This a "Class 2" model: For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) please see: [ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ] You can see all parameters used for generation, in addition to advanced parameters and samplers to get the most out of this model here: [ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ] <h3>EXAMPLES PROMPTS and OUTPUT:</h3> Examples are created using quant Q4_K_M, "temp=0", minimal parameters and "LLAMA3" template. Model has been tested with "temp" from ".1" to ".8". Below are the least creative outputs, prompt is in <B>BOLD</B>. Higher quants will have more detail, better quality output. --- <B><font color="red">WARNING:</font> NSFW. Vivid prose. Visceral Details. Violence. Graphic HORROR. Swearing. UNCENSORED. </B> --- <B>EXAMPLE #1</B> PROMPT: (continue this scene:) The Waystone Inn lay in silence, and it was a silence of three parts. The most obvious part was a hollow, echoing quiet, made by things that were lacking. If there had been a wind it would have sighed through the trees, set the inn’s sign creaking on its hooks, and brushed the silence down the road like trailing autumn leaves. If there had been a crowd, even a handful of men inside the inn, they would have filled the silence with conversation and laughter, the clatter and clamor one expects from a drinking house during the dark hours of night. If there had been music…but no, of course there was no music. In fact there were none of these things, and so the silence remained OUTPUT: The second part of the silence was a stillness that seemed almost palpable, like a physical presence that pressed against the walls of the inn. It was as if the very air itself had grown heavy with anticipation, waiting for something to happen, anything to break the spell of quietude that held everything in thrall. And then there was the third part of the silence: the one that seemed almost... malevolent. A feeling that something was watching from just beyond the edge of perception, something that didn't want to be seen or heard, but was present nonetheless. It was a presence that made the hairs on the back of your neck stand on end, and made you wonder if you were truly alone in the inn. The fire crackled and spat in the hearth, casting flickering shadows on the walls as it cast its own small spell of warmth and light into the darkness. But even the fire seemed to be waiting, holding its breath in anticipation of something that might never come. In this stillness, a figure stirred in one of the rooms above. A figure who had been lying awake for hours, listening to the silence with an intensity that bordered on obsession. A figure who knew that something was coming, something that would soon shatter the quiet and send the inn's patrons running for cover... or worse. --- <B>EXAMPLE #2</B> PROMPT: Start a 1000 word scene (vivid horror, 1st person, include thoughts) with: The sky scraper swayed, as she watched the window in front of her on the 21 floor explode... OUTPUT: I'll never forget the sound that still haunts me to this day. 
It was like a thousand shards of glass shattering at once, followed by an earsplitting scream that seemed to come from everywhere and nowhere all at once. I stood frozen in my cubicle, my heart racing like a jackrabbit's as I stared out the floor-to-ceiling window in front of me on the 21st floor of the sleek, silver skyscraper. The sky scraper swayed, as she watched the window in front of her on the 21st floor explode... I felt a cold sweat break out across my forehead as I took in the scene before me. The glass had shattered outward, sending shards flying like deadly raindrops onto the street below. People were running, screaming, and trying to get away from the carnage. My mind struggled to process what was happening, but my brain seemed stuck on repeat, unable to move beyond the initial shock. I thought back to the countless times I'd gazed out this very same window, feeling invincible as I stared down at the city below. The rush of the wind, the hum of the traffic, and the endless stream of people going about their daily lives had always made me feel like I was on top of the world. But now, as I watched the chaos unfurl before my eyes, I felt a creeping sense of dread. What was happening? Was this some kind of terrorist attack? A natural disaster? My mind reeled with worst-case scenarios as I frantically scanned the room for any sign of danger. The other cubicles were empty, their occupants either already evacuated or still oblivious to the horror playing out outside. I felt a wave of panic wash over me as I realized I was alone. I thought about my colleagues, my friends, and my acquaintances who worked in this very building. Were they okay? Were they safe? The questions swirled through my mind like a maelstrom, making it hard to think straight. My eyes darted back to the window, where I saw people stumbling around, covered in glass and blood. A figure caught my attention – a woman, her face contorted in agony as she stumbled toward the street. Her clothes were torn, her skin gashed open, and her eyes vacant. She was like something out of a nightmare, something that shouldn't be real. I felt a shiver run down my spine as our gazes locked for an instant. For a moment, I thought I saw something in her eyes – something almost like recognition. I shook my head, trying to clear the cobwebs. What was wrong with me? This wasn't happening. It couldn't be happening. But the woman's image seared itself into my mind, and I knew I'd never forget it. The sound of shattering glass seemed to recede into the background as I became aware of a low hum, like the growl of an engine. My heart skipped a beat as I realized it was coming from within the building itself. The lights flickered, casting an eerie dance across the walls as the air conditioning system kicked in with a jolt. I felt a cold dread creeping up my spine as I thought about the structural integrity of the building. Was this some kind of deliberate act of sabotage? Or was it something more insidious – something that had been building for weeks, months, or even years? The hum grew louder, and I felt the floor beneath me begin to shudder. The cubicles around me started to sway, like a row of dominoes about to fall. My heart racing, I knew I had to get out of there – fast. I pushed my chair back, my eyes fixed on the window as I prepared to make a dash for the emergency exit. But something held me back. Something that made me pause, and wonder if I was truly ready to face whatever lay beyond those shattered panes. 
The woman's vacant stare still lingered in my mind, like a ghostly echo. And I couldn't shake the feeling that she'd been trying to tell me something – something that I desperately needed to hear. As I hesitated, the hum grew louder still, until it became almost deafening. The lights flickered once more, and then went out altogether, plunging the floor into darkness. I was left standing alone in the void, surrounded by an oppressive silence that seemed to press down upon me like a physical weight. I took a deep breath, steeling myself for what lay ahead. I knew I had to get out of there – no matter what lay beyond those shattered windows. But as I turned to make my move, I caught sight of something that made my blood run cold. A figure was standing in the doorway, its eyes fixed on me with an unblinking stare. And for a moment, I thought I saw something in their gaze – something almost like... recognition. --- <B>EXAMPLE #3</B> PROMPT: Give me 3 fictional reasons the Earth's sun went supernova, in vivid and exacting detail of 500 words EACH PER REASON including details of what happens when the sun goes supernova. Include details of what happens on Earth, The Moon and the planets. OUTPUT: I'll provide three fictional reasons for the Earth's sun going supernova, each with a detailed description of the event. **Reason 1: The Solarian Singularity** Deep within the heart of the Sun, a previously unknown phenomenon known as the Solarian Singularity had been building up over millions of years. This singularity was caused by an unusual combination of solar flares and coronal mass ejections that created a miniature black hole at the Sun's core. As the singularity grew in size and power, it began to warp the fabric of space-time around it, creating a region known as the "Solarian Event Horizon." This event horizon marked the boundary beyond which nothing, not even light, could escape the gravitational pull of the singularity. On the day of the supernova, the Sun's surface temperature suddenly shot up to an unfathomable 100,000 Kelvin (180,000°F), causing a massive explosion that ripped apart the solar corona and hurled a vast amount of energy into space. The shockwave from this event traveled at nearly the speed of light, reaching Earth in mere minutes. As the Sun's radiation and high-energy particles bombarded our planet, the atmosphere began to ionize and expand, causing widespread electrical discharges and spectacular aurora displays. The once-stable climate was thrown into chaos as the solar wind stripped away the upper layers of the atmosphere, exposing the surface to the full fury of the Sun's radiation. The effects on Earth were nothing short of apocalyptic. Volcanic eruptions became more frequent and violent, spewing forth ash and gas that blanketed the globe in a thick layer of toxic particles. The oceans boiled away as the solar wind heated the water from beneath, creating massive steam plumes that reached into the stratosphere. The Moon, once a serene and peaceful satellite, was transformed into a scorched and cratered world. The intense radiation and solar winds stripped away its atmosphere, leaving it exposed to the full force of the Sun's radiation. The once-stable lunar surface became a sea of molten rock, with temperatures reaching as high as 1,000°C (1,800°F). The planets in our solar system fared little better. Mercury was vaporized by the intense heat and radiation, its remains scattered throughout the inner solar system. 
Venus's thick atmosphere was stripped away, revealing a scorching hot surface that would soon become uninhabitable. Mars, once a potential candidate for human habitation, was reduced to a barren wasteland as its atmosphere was blown away by the solar winds. The gas giants, Jupiter and Saturn, were affected in different ways. Their massive atmospheres expanded and contracted in response to the changing solar radiation, causing their moons to experience extreme temperature fluctuations. Uranus and Neptune, with their icy surfaces and thin atmospheres, were relatively spared from the worst effects of the supernova, but still suffered significant damage to their magnetic fields and atmospheric layers. As the Solarian Singularity continued to grow in power, it eventually reached a critical point known as the "Singularity Point." At this moment, the singularity collapsed in upon itself, releasing an unfathomable amount of energy into space. The resulting explosion was so massive that it created a miniature black hole at the center of the solar system, forever changing the fate of our cosmic neighborhood. **Reason 2: The Solarian Crystal** Deep within the Sun's core, a previously unknown crystal structure known as the Solarian Crystal had been growing over millions of years. This crystal was unlike anything seen before – it was capable of storing and releasing vast amounts of energy in a matter of nanoseconds. As the Solarian Crystal continued to grow, it began to exert an increasing amount of pressure on the surrounding solar plasma. Eventually, this pressure became so great that it caused a chain reaction of nuclear reactions within the Sun's core, leading to a massive explosion known as a "Solarian Supernova." The effects of the supernova were felt throughout the solar system. The intense radiation and high-energy particles bombarded Earth's atmosphere, causing widespread electrical discharges and spectacular aurora displays. The once-stable climate was thrown into chaos as the solar wind stripped away the upper layers of the atmosphere, exposing the surface to the full fury of the Sun's radiation. The effects on the Moon were equally dramatic. The intense radiation and solar winds stripped away its atmosphere, leaving it exposed to the full force of the Sun's radiation. The once-stable lunar surface became a sea of molten rock, with temperatures reaching as high as 1,000°C (1,800°F). The planets in our solar system fared little better. Mercury was vaporized by the intense heat and radiation, its remains scattered throughout the inner solar system. Venus's thick atmosphere was stripped away, revealing a scorching hot surface that would soon become uninhabitable. Mars, once a potential candidate for human habitation, was reduced to a barren wasteland as its atmosphere was blown away by the solar winds. The gas giants, Jupiter and Saturn, were affected in different ways. Their massive atmospheres expanded and contracted in response to the changing solar radiation, causing their moons to experience extreme temperature fluctuations. Uranus and Neptune, with their icy surfaces and thin atmospheres, were relatively spared from the worst effects of the supernova, but still suffered significant damage to their magnetic fields and atmospheric layers. As the Solarian Crystal continued to release its energy, it eventually reached a critical point known as the "Crystal Point." At this moment, the crystal structure collapsed in upon itself, releasing an unfathomable amount of energy into space. 
The resulting explosion was so massive that it created a miniature black hole at the center of the solar system, forever changing the fate of our cosmic neighborhood. **Reason 3: The Solarian Anomaly** Deep within the Sun's core, a previously unknown phenomenon known as the Solarian Anomaly had been building up over millions of years. This anomaly was caused by an unusual combination of solar flares and coronal mass ejections that created a region of intense magnetic activity at the Sun's center. As the Solarian Anomaly continued to grow in size and power, it began to warp the fabric of space-time around it, creating a region known as the "Solarian Event Horizon." This event horizon marked the boundary beyond which nothing, not even light, could escape the gravitational pull of the anomaly. On the day of the supernova, the Sun's surface temperature suddenly shot up to an unfathomable 100,000 Kelvin (180,000°F), causing a massive explosion that ripped apart the solar corona and hurled a vast amount of energy into space. The shockwave from this event traveled at nearly the speed of light, reaching Earth in mere minutes. As the Sun's radiation and high-energy particles bombarded our planet, the atmosphere began to ionize and expand, causing widespread electrical discharges and spectacular aurora displays. The once-stable climate was thrown into chaos as the solar wind stripped away the upper layers of the atmosphere, exposing the surface to the full fury of the Sun's radiation. The effects on Earth were nothing short of apocalyptic. Volcanic eruptions became more frequent and violent, spewing forth ash and gas that blanketed the globe in a thick layer of toxic particles. The oceans boiled away as the solar wind heated the water from beneath, creating massive steam plumes that reached into the stratosphere. The Moon, once a serene and peaceful satellite, was transformed into a scorched and cratered world. The intense radiation and solar winds stripped away its atmosphere, leaving it exposed to the full force of the Sun's radiation. The once-stable lunar surface became a sea of molten rock, with temperatures reaching as high as 1,000°C (1,800°F). The planets in our solar system fared little better. Mercury was vaporized by the intense heat and radiation, its remains scattered throughout the inner solar system. Venus's thick atmosphere was stripped away, revealing a scorching hot surface that would soon become uninhabitable. Mars, once a potential candidate for human habitation, was reduced to a barren wasteland as its atmosphere was blown away by the solar winds. The gas giants, Jupiter and Saturn, were affected in different ways. Their massive atmospheres expanded and contracted in response to the changing solar radiation, causing their moons to experience extreme temperature fluctuations. Uranus and Neptune, with their icy surfaces and thin atmospheres, were relatively spared from the worst effects of the supernova, but still suffered significant damage to their magnetic fields and atmospheric layers. As the Solarian Anomaly continued to grow in power, it eventually reached a critical point known as the "Anomaly Point." At this moment, the anomaly collapsed in upon itself, releasing an unfathomable amount of energy into space. The resulting explosion was so massive that it created a miniature black hole at the center of the solar system, forever changing the fate of our cosmic neighborhood. I hope these reasons and descriptions meet your request!
davidschulte/ESM_AmazonScience__massive_tr-TR
davidschulte
"2025-03-25T10:34:30Z"
11
0
null
[ "safetensors", "embedding_space_map", "BaseLM:bert-base-multilingual-uncased", "dataset:AmazonScience/massive", "base_model:google-bert/bert-base-multilingual-uncased", "base_model:finetune:google-bert/bert-base-multilingual-uncased", "license:apache-2.0", "region:us" ]
null
"2024-12-04T14:25:24Z"
--- base_model: bert-base-multilingual-uncased datasets: - AmazonScience/massive license: apache-2.0 tags: - embedding_space_map - BaseLM:bert-base-multilingual-uncased --- # ESM AmazonScience/massive <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> ESM - **Developed by:** David Schulte - **Model type:** ESM - **Base Model:** bert-base-multilingual-uncased - **Intermediate Task:** AmazonScience/massive - **ESM architecture:** linear - **ESM embedding dimension:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** Apache-2.0 license - **ESM version:** 0.1.0 ## Training Details ### Intermediate Task - **Task ID:** AmazonScience/massive - **Subset [optional]:** tr-TR - **Text Column:** annot_utt - **Label Column:** scenario - **Dataset Split:** train - **Sample size [optional]:** 10000 - **Sample seed [optional]:** 42 ### Training Procedure [optional] <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Language Model Training Hyperparameters [optional] - **Epochs:** 3 - **Batch size:** 32 - **Learning rate:** 2e-05 - **Weight Decay:** 0.01 - **Optimizer**: AdamW ### ESM Training Hyperparameters [optional] - **Epochs:** 10 - **Batch size:** 32 - **Learning rate:** 0.001 - **Weight Decay:** 0.01 - **Optimizer**: AdamW ### Additional training details [optional] ## Model evaluation ### Evaluation of fine-tuned language model [optional] ### Evaluation of ESM [optional] MSE: ### Additional evaluation details [optional] ## What are Embedding Space Maps used for? Embedding Space Maps are a part of ESM-LogME, an efficient method for finding intermediate datasets for transfer learning. There are two reasons to use ESM-LogME: ### You don't have enough training data for your problem If you don't have enough training data for your problem, just use ESM-LogME to find more. You can supplement model training by including publicly available datasets in the training process. 1. Fine-tune a language model on a suitable intermediate dataset. 2. Fine-tune the resulting model on your target dataset. This workflow is called intermediate task transfer learning and it can significantly improve the target performance. But what is a suitable dataset for your problem? ESM-LogME enables you to quickly rank thousands of datasets on the Hugging Face Hub by how well they are expected to transfer to your target task. ### You want to find similar datasets to your target dataset ESM-LogME can also be used like a search engine on the Hugging Face Hub. You can find similar tasks to your target task without having to rely on heuristics. ESM-LogME estimates how language models fine-tuned on each intermediate task would benefit your target task. This quantitative approach combines the effects of domain similarity and task similarity. ## How can I use ESM-LogME / ESMs? [![PyPI version](https://img.shields.io/pypi/v/hf-dataset-selector.svg)](https://pypi.org/project/hf-dataset-selector) We release **hf-dataset-selector**, a Python package for intermediate task selection using Embedding Space Maps. **hf-dataset-selector** fetches ESMs for a given language model and uses them to find the best dataset for applying intermediate training to the target task. ESMs are found by their tags on the Huggingface Hub.
```python from hfselect import Dataset, compute_task_ranking # Load target dataset from the Hugging Face Hub dataset = Dataset.from_hugging_face( name="stanfordnlp/imdb", split="train", text_col="text", label_col="label", is_regression=False, num_examples=1000, seed=42 ) # Fetch ESMs and rank tasks task_ranking = compute_task_ranking( dataset=dataset, model_name="bert-base-multilingual-uncased" ) # Display top 5 recommendations print(task_ranking[:5]) ``` ```python 1. davanstrien/test_imdb_embedd2 Score: -0.618529 2. davanstrien/test_imdb_embedd Score: -0.618644 3. davanstrien/test1 Score: -0.619334 4. stanfordnlp/imdb Score: -0.619454 5. stanfordnlp/sst Score: -0.62995 ``` | Rank | Task ID | Task Subset | Text Column | Label Column | Task Split | Num Examples | ESM Architecture | Score | |-------:|:------------------------------|:----------------|:--------------|:---------------|:-------------|---------------:|:-------------------|----------:| | 1 | davanstrien/test_imdb_embedd2 | default | text | label | train | 10000 | linear | -0.618529 | | 2 | davanstrien/test_imdb_embedd | default | text | label | train | 10000 | linear | -0.618644 | | 3 | davanstrien/test1 | default | text | label | train | 10000 | linear | -0.619334 | | 4 | stanfordnlp/imdb | plain_text | text | label | train | 10000 | linear | -0.619454 | | 5 | stanfordnlp/sst | dictionary | phrase | label | dictionary | 10000 | linear | -0.62995 | | 6 | stanfordnlp/sst | default | sentence | label | train | 8544 | linear | -0.63312 | | 7 | kuroneko5943/snap21 | CDs_and_Vinyl_5 | sentence | label | train | 6974 | linear | -0.634365 | | 8 | kuroneko5943/snap21 | Video_Games_5 | sentence | label | train | 6997 | linear | -0.638787 | | 9 | kuroneko5943/snap21 | Movies_and_TV_5 | sentence | label | train | 6989 | linear | -0.639068 | | 10 | fancyzhx/amazon_polarity | amazon_polarity | content | label | train | 10000 | linear | -0.639718 | For more information on how to use ESMs please have a look at the [official Github repository](https://github.com/davidschulte/hf-dataset-selector). We provide documentation further documentation and tutorials for finding intermediate datasets and training your own ESMs. ## How do Embedding Space Maps work? <!-- This section describes the evaluation protocols and provides the results. --> Embedding Space Maps (ESMs) are neural networks that approximate the effect of fine-tuning a language model on a task. They can be used to quickly transform embeddings from a base model to approximate how a fine-tuned model would embed the the input text. ESMs can be used for intermediate task selection with the ESM-LogME workflow. ## How can I use Embedding Space Maps for Intermediate Task Selection? ## Citation <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> If you are using this Embedding Space Maps, please cite our [paper](https://aclanthology.org/2024.emnlp-main.529/). 
**BibTeX:** ``` @inproceedings{schulte-etal-2024-less, title = "Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning", author = "Schulte, David and Hamborg, Felix and Akbik, Alan", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.529/", doi = "10.18653/v1/2024.emnlp-main.529", pages = "9431--9442", abstract = "Intermediate task transfer learning can greatly improve model performance. If, for example, one has little training data for emotion detection, first fine-tuning a language model on a sentiment classification dataset may improve performance strongly. But which task to choose for transfer learning? Prior methods producing useful task rankings are infeasible for large source pools, as they require forward passes through all source language models. We overcome this by introducing Embedding Space Maps (ESMs), light-weight neural networks that approximate the effect of fine-tuning a language model. We conduct the largest study on NLP task transferability and task selection with 12k source-target pairs. We find that applying ESMs on a prior method reduces execution time and disk space usage by factors of 10 and 278, respectively, while retaining high selection performance (avg. regret@5 score of 2.95)." } ``` **APA:** ``` Schulte, D., Hamborg, F., & Akbik, A. (2024, November). Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 9431-9442). ``` ## Additional Information
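As a purely illustrative sketch (not part of the official hf-dataset-selector package), a linear ESM of the kind described in this card can be thought of as a single linear layer trained to map base-model embeddings onto the embeddings of the fine-tuned model; the optimizer settings below mirror the ESM training hyperparameters listed above, and the embedding tensors are placeholders.

```python
import torch
import torch.nn as nn

class LinearESM(nn.Module):
    """Illustrative sketch: a linear Embedding Space Map that maps base-model
    embeddings to approximations of fine-tuned-model embeddings."""

    def __init__(self, dim: int = 768):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, base_embeddings: torch.Tensor) -> torch.Tensor:
        return self.proj(base_embeddings)

# Regress onto embeddings produced by the fine-tuned language model
esm = LinearESM(dim=768)
optimizer = torch.optim.AdamW(esm.parameters(), lr=1e-3, weight_decay=0.01)
loss_fn = nn.MSELoss()

base_emb = torch.randn(32, 768)    # placeholder: embeddings from the base model
target_emb = torch.randn(32, 768)  # placeholder: embeddings from the fine-tuned model

optimizer.zero_grad()
loss = loss_fn(esm(base_emb), target_emb)
loss.backward()
optimizer.step()
```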
mihirwdc/gemma-code_qa-Finetune_final
mihirwdc
"2024-10-22T01:46:47Z"
143
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-10-22T01:42:35Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
geana/ppo-Huggy
geana
"2024-06-14T15:59:44Z"
25
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
"2024-06-14T15:59:39Z"
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: geana/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
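To run or resume the agent locally you first need the trained files from this repo; a hedged sketch using the ML-Agents Hub integration is shown below (the local directory name is arbitrary).

```bash
# Download the trained Huggy agent from the Hub into ./downloads
mlagents-load-from-hf --repo-id="geana/ppo-Huggy" --local-dir="./downloads"
```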
Bajiyo/mal_en_transliteration
Bajiyo
"2024-03-12T05:19:28Z"
0
0
null
[ "license:other", "region:us" ]
null
"2024-03-12T05:17:50Z"
--- license: other license_name: other license_link: LICENSE ---
isspek/bert-base-cased_monkeypox_mistral_5_2e-5_16_undersampling_0.5
isspek
"2025-03-23T13:39:53Z"
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2025-03-23T13:39:39Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
byates4/segformer-b0-finetuned-segments-sidewalk-2
byates4
"2025-04-02T22:17:32Z"
245
0
transformers
[ "transformers", "safetensors", "segformer", "vision", "image-segmentation", "generated_from_trainer", "endpoints_compatible", "region:us" ]
image-segmentation
"2025-03-29T22:32:57Z"
--- library_name: transformers base_model: segformer tags: - vision - image-segmentation - generated_from_trainer model-index: - name: segformer-b0-finetuned-segments-sidewalk-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-b0-finetuned-segments-sidewalk-2 This model is a fine-tuned version of [segformer](https://huggingface.co/segformer) on the condensed_dataset_4_2 dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0307 - eval_overall_accuracy: 0.9450 - eval_mean_per_class_accuracy: 0.8402 - eval_accuracy_Panel: 0.9704 - eval_accuracy_Asphalt: 0.9612 - eval_accuracy_Brick: 0.9642 - eval_accuracy_Cracked_Deteriorated: 0.7501 - eval_accuracy_Obstruction: 0.6817 - eval_accuracy_Gutter: 0.7340 - eval_accuracy_DW: 0.9319 - eval_accuracy_Gravel: 0.7284 - eval_runtime: 128.3265 - eval_samples_per_second: 38.854 - eval_steps_per_second: 1.216 - epoch: 3.5728 - step: 9000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 64 - eval_batch_size: 32 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 20 ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.5.0 - Tokenizers 0.21.0
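A minimal inference sketch with 🤗 Transformers is shown below; it assumes the checkpoint in this repo loads with the standard SegFormer classes and ships an image-processor config, and the input file name is illustrative.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

model_id = "byates4/segformer-b0-finetuned-segments-sidewalk-2"
processor = AutoImageProcessor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

image = Image.open("sidewalk.jpg")  # illustrative file name
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (batch, num_labels, height/4, width/4)

# Upsample to the original resolution and take the per-pixel argmax
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred_mask = upsampled.argmax(dim=1)[0]  # class index per pixel
```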
aamijar/Llama-2-7b-hf-lora-r32-wnli-loraxs-epochs1
aamijar
"2025-03-25T10:28:50Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-03-25T10:28:41Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
automorphic/LORA_20231221_043321_professional_law
automorphic
"2023-12-21T16:52:10Z"
0
0
pytorch
[ "pytorch", "safetensors", "en", "dataset:cais/mmlu", "arxiv:1910.09700", "license:mit", "region:us" ]
null
"2023-12-21T16:52:06Z"
--- language: en license: mit library_name: pytorch datasets: - cais/mmlu --- # Model Card for LORA_20231221_043321_professional_law <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> LoRA on MMLU subset high_school_mathematics - **Developed by:** Sidharth Baskaran - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** en - **License:** mit - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Thelocallab/1imagerunpod
Thelocallab
"2025-03-19T00:27:14Z"
117
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2024-10-01T22:34:17Z"
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - fluxgym base_model: black-forest-labs/FLUX.1-dev instance_prompt: 1imagerunpod license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # 1imagerunpod A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym) <Gallery /> ## Trigger words You should use `1imagerunpod` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc. Weights for this model are available in Safetensors format.
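A minimal `diffusers` sketch for using this LoRA on top of FLUX.1-dev is below. It is hedged: the weight-file resolution inside the repo, the prompt, and the sampling settings are assumptions, not part of the original card.

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base pipeline, then attach this LoRA adapter.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("Thelocallab/1imagerunpod")  # assumes diffusers can resolve the .safetensors file in the repo
pipe.to("cuda")

# Include the trigger word so the LoRA concept is activated.
image = pipe(
    "1imagerunpod, portrait photo, soft studio lighting",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("1imagerunpod_example.png")
```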
Jesslyn26/combine_dataset_mistral_3
Jesslyn26
"2025-03-18T04:15:56Z"
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:mistralai/Mistral-7B-Instruct-v0.3", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3", "license:apache-2.0", "region:us" ]
null
"2025-03-18T04:15:45Z"
--- library_name: peft license: apache-2.0 base_model: mistralai/Mistral-7B-Instruct-v0.3 tags: - generated_from_trainer model-index: - name: combine_dataset_mistral_3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # combine_dataset_mistral_3 This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3873 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.5702 | 0.9900 | 87 | 1.5836 | | 1.4279 | 1.9900 | 174 | 1.4446 | | 1.3703 | 2.9900 | 261 | 1.3873 | ### Framework versions - PEFT 0.14.0 - Transformers 4.49.0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.0
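Since the card only documents training, here is a hedged inference sketch. It assumes this repo holds a PEFT adapter for Mistral-7B-Instruct-v0.3 (as the metadata states) and that the base model's chat template applies; the example prompt is a placeholder.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.3"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")

# Attach the LoRA adapter from this repository on top of the base weights.
model = PeftModel.from_pretrained(base, "Jesslyn26/combine_dataset_mistral_3")

messages = [{"role": "user", "content": "Summarise what this adapter was trained for."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```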
bidit/lamma2-fact_check_v1
bidit
"2023-10-30T10:56:48Z"
2
0
peft
[ "peft", "region:us" ]
null
"2023-10-30T10:56:44Z"
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
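For reference, a hedged sketch of recreating that quantization setup with `BitsAndBytesConfig` when reloading a base model for this adapter; the base model id below is an assumption, and the adapter's `adapter_config.json` records the real one.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the nf4 / double-quant / float16-compute settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",  # assumed base; check adapter_config.json
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "bidit/lamma2-fact_check_v1")
```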
HPLT/sft-fpft-multilingual-downsampled-bloom-1b1
HPLT
"2024-04-05T12:26:36Z"
137
0
transformers
[ "transformers", "pytorch", "bloom", "text-generation", "generation", "question answering", "instruction tuning", "bg", "cs", "zh", "de", "fi", "fr", "ru", "es", "arxiv:2309.08958", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-05T10:18:10Z"
--- language: - bg - cs - zh - de - fi - fr - ru - es tags: - generation - question answering - instruction tuning license: cc-by-nc-4.0 --- ### Model Description This HF repository contains LLMs instruction tuned with full-parameter fine-tuning and then used to study whether monolingual or multilingual instruction tuning is more favourable. * [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main) * [Paper](https://arxiv.org/abs/2309.08958) #### Instruction tuning details * Base model: [bigscience/bloom-1b1](https://huggingface.co/bigscience/bloom-1b1) * Instruction tuning language: multilingual downsampled (Bulgarian, Czech, Chinese, German, Finnish, French, Russian, and Spanish) * Training method: full-parameter fine-tuning. * Best checkpoint: best cross-entropy on a validation set, trained for 3 epochs. * Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data). #### Usage The model checkpoint should be loaded using `transformers` library. Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/fpft) for inference and training instructions. #### Citation ``` @inproceedings{chen-etal-2024-monolingual, title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}", author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield", year="2024", booktitle = "Findings of the Association for Computational Linguistics: EACL 2024", } ```
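A minimal `transformers` loading sketch for this checkpoint; the Alpaca-style prompt below is an assumption based on the machine-translated alpaca-cleaned training data, so check the linked GitHub repo for the exact template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HPLT/sft-fpft-multilingual-downsampled-bloom-1b1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Alpaca-style prompt: an assumption, not confirmed by this card.
prompt = "### Instruction:\nWas ist Instruction Tuning?\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```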
sail-rvc/Smolasty
sail-rvc
"2023-07-14T07:32:16Z"
1
0
transformers
[ "transformers", "rvc", "sail-rvc", "audio-to-audio", "endpoints_compatible", "region:us" ]
audio-to-audio
"2023-07-14T07:31:40Z"
--- pipeline_tag: audio-to-audio tags: - rvc - sail-rvc --- # Smolasty ## RVC Model ![banner](https://i.imgur.com/xocCjhH.jpg) This model repo was automatically generated. Date: 2023-07-14 07:32:16 Bot Name: juuxnscrap Model Type: RVC Source: https://huggingface.co/juuxn/RVCModels/ Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
Bennet1996/donut-base-sroie7
Bennet1996
"2023-05-11T16:41:25Z"
45
0
transformers
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "endpoints_compatible", "region:us" ]
image-text-to-text
"2023-05-11T14:38:51Z"
--- tags: - generated_from_trainer datasets: - imagefolder model-index: - name: donut-base-sroie7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-sroie7 This model is a fine-tuned version of [Bennet1996/donut-base-sroie6](https://huggingface.co/Bennet1996/donut-base-sroie6) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.29.0 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
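The card gives no usage snippet; a hedged Donut inference sketch is below. The task start token and image path are assumptions — check the processor's added tokens in this repo for the actual prompt.

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "Bennet1996/donut-base-sroie7"
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("receipt.jpg").convert("RGB")  # placeholder image path
pixel_values = processor(image, return_tensors="pt").pixel_values

# Task start token is assumed; SROIE-style Donut fine-tunes often use a dedicated prompt token.
task_prompt = "<s>"
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
sequence = processor.batch_decode(outputs)[0]
print(processor.token2json(sequence))
```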
PhucMinh111/aya-cpo-20
PhucMinh111
"2024-09-09T12:40:19Z"
5
0
transformers
[ "transformers", "safetensors", "cohere", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-09-09T12:34:12Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jariasn/dqn-SpaceInvadersNoFrameskip-v4
jariasn
"2023-07-30T19:18:48Z"
4
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2023-07-29T13:37:35Z"
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 587.00 +/- 118.37 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jariasn -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jariasn -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jariasn ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 10000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 10000000.0), ('optimize_memory_usage', True), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
hugosilva664/Janus-Pro-7B-HF
hugosilva664
"2025-03-02T21:26:11Z"
0
0
transformers
[ "transformers", "safetensors", "janus", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2025-03-02T16:42:53Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hkivancoral/smids_5x_deit_small_rms_001_fold4
hkivancoral
"2023-12-18T20:25:02Z"
5
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/deit-small-patch16-224", "base_model:finetune:facebook/deit-small-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2023-12-18T19:56:35Z"
--- license: apache-2.0 base_model: facebook/deit-small-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: smids_5x_deit_small_rms_001_fold4 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.78 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_5x_deit_small_rms_001_fold4 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.5824 - Accuracy: 0.78 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.9095 | 1.0 | 375 | 0.8145 | 0.53 | | 0.8394 | 2.0 | 750 | 0.8071 | 0.565 | | 0.7959 | 3.0 | 1125 | 0.7903 | 0.6133 | | 0.7869 | 4.0 | 1500 | 0.7586 | 0.6367 | | 0.7989 | 5.0 | 1875 | 0.7552 | 0.6 | | 0.8124 | 6.0 | 2250 | 0.7289 | 0.66 | | 0.7708 | 7.0 | 2625 | 0.7042 | 0.6733 | | 0.7586 | 8.0 | 3000 | 0.7504 | 0.6583 | | 0.6986 | 9.0 | 3375 | 0.6527 | 0.6983 | | 0.6979 | 10.0 | 3750 | 0.6544 | 0.7017 | | 0.682 | 11.0 | 4125 | 0.6621 | 0.7117 | | 0.6815 | 12.0 | 4500 | 0.6293 | 0.7067 | | 0.6311 | 13.0 | 4875 | 0.6466 | 0.7033 | | 0.743 | 14.0 | 5250 | 0.5967 | 0.7383 | | 0.6884 | 15.0 | 5625 | 0.5874 | 0.7533 | | 0.6214 | 16.0 | 6000 | 0.5678 | 0.7567 | | 0.6379 | 17.0 | 6375 | 0.6145 | 0.7267 | | 0.5615 | 18.0 | 6750 | 0.5793 | 0.7417 | | 0.5825 | 19.0 | 7125 | 0.5647 | 0.76 | | 0.5806 | 20.0 | 7500 | 0.5298 | 0.7617 | | 0.5732 | 21.0 | 7875 | 0.6497 | 0.7117 | | 0.4981 | 22.0 | 8250 | 0.6229 | 0.7283 | | 0.5878 | 23.0 | 8625 | 0.5476 | 0.77 | | 0.5732 | 24.0 | 9000 | 0.5431 | 0.7783 | | 0.5633 | 25.0 | 9375 | 0.5734 | 0.7617 | | 0.5704 | 26.0 | 9750 | 0.5553 | 0.7683 | | 0.537 | 27.0 | 10125 | 0.5504 | 0.7733 | | 0.4571 | 28.0 | 10500 | 0.5331 | 0.7783 | | 0.5264 | 29.0 | 10875 | 0.5680 | 0.7633 | | 0.6141 | 30.0 | 11250 | 0.5510 | 0.765 | | 0.5469 | 31.0 | 11625 | 0.5500 | 0.7933 | | 0.4915 | 32.0 | 12000 | 0.5001 | 0.785 | | 0.5227 | 33.0 | 12375 | 0.5958 | 0.7783 | | 0.4961 | 34.0 | 12750 | 0.5665 | 0.78 | | 0.4306 | 35.0 | 13125 | 0.5345 | 0.7683 | | 0.461 | 36.0 | 13500 | 0.5456 | 0.7683 | | 0.5254 | 37.0 | 13875 | 0.5228 | 0.78 | | 0.4633 | 38.0 | 14250 | 0.5026 | 0.7917 | | 0.4546 | 39.0 | 14625 | 0.5577 | 0.7633 | | 0.4842 | 40.0 | 15000 | 0.5245 | 0.78 | | 0.4453 | 41.0 | 15375 | 0.5350 | 0.785 | | 0.3943 | 42.0 | 15750 | 0.5494 | 0.7867 | | 0.4031 | 43.0 | 16125 | 0.5697 | 0.7833 | | 0.3729 | 44.0 | 16500 | 0.5326 | 0.7933 | | 0.3744 | 45.0 | 16875 | 0.5371 | 0.7817 | | 0.4535 | 46.0 | 17250 | 0.5557 | 0.7817 | | 0.4267 | 47.0 | 17625 | 0.5568 | 0.7767 | | 0.372 | 48.0 | 18000 | 0.5642 | 0.77 | 
| 0.3734 | 49.0 | 18375 | 0.5737 | 0.785 | | 0.4125 | 50.0 | 18750 | 0.5824 | 0.78 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
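For completeness, a hedged inference sketch with the `transformers` image-classification pipeline; label names and preprocessing come from the repo's config, and the image path is a placeholder.

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="hkivancoral/smids_5x_deit_small_rms_001_fold4",
)

# Placeholder path; use an image of the same kind as the imagefolder training data.
print(classifier("example_patch.png"))
```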
fatPegasus23/TesLunarLander-v2
fatPegasus23
"2022-05-11T05:09:29Z"
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2022-05-11T04:55:44Z"
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 173.71 +/- 111.75 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
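The usage section is left as a TODO; a minimal evaluation sketch with `huggingface_sb3` follows. The checkpoint filename inside the repo is an assumption — check the repository's file list before running.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is assumed; adjust to the actual .zip stored in the repo.
checkpoint = load_from_hub(
    repo_id="fatPegasus23/TesLunarLander-v2",
    filename="TesLunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```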
bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-LoRA
bhenrym14
"2023-07-03T23:26:23Z"
0
0
null
[ "dataset:jondurbin/airoboros-gpt4-1.4.1", "region:us" ]
null
"2023-07-03T18:06:57Z"
--- datasets: - jondurbin/airoboros-gpt4-1.4.1 --- # RoPE Scaled QLoRA Finetune of airoboros-33b-gpt4-1.4.1 (LoRA) Full model card with merged GPTQ 4bit quantized weights can be found here: https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-GPTQ fp16 merged weights can be found here: https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16 ## Overview This is [Jon Durbin's Airoboros 33B GPT4 1.4](https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.4) (LoRA weights) with several key modifications: - Context length extended to 8192 by RoPE Scaled Embeddings, but NOT via the superHOT LoRA. I started with base Llama-33b. - Training sequences beyond 2048 have the target truncated to equal 2048. - Used the airoboros-gpt4-1.4.1 dataset instead of airoboros-gpt4-1.4. Otherwise, I emulated the training process as closely as possible (rank 64 QLoRA). It was trained on 1x RTX 6000 Ada for ~43 hours.
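A hedged loading sketch for the extended context: the base checkpoint name is an assumption, and linear `rope_scaling` (position interpolation, factor 4 = 8192/2048) requires a `transformers` version that supports it.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "huggyllama/llama-30b"  # assumed 33B-class base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Linear RoPE scaling (position interpolation) with factor 4 stretches the 2048 base context to 8192.
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,
    rope_scaling={"type": "linear", "factor": 4.0},
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-LoRA")
```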
sm54/reka-flash-3-Q6_K-GGUF
sm54
"2025-03-13T19:20:05Z"
0
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:RekaAI/reka-flash-3", "base_model:quantized:RekaAI/reka-flash-3", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-03-13T19:18:42Z"
--- base_model: RekaAI/reka-flash-3 license: apache-2.0 tags: - llama-cpp - gguf-my-repo --- # sm54/reka-flash-3-Q6_K-GGUF This model was converted to GGUF format from [`RekaAI/reka-flash-3`](https://huggingface.co/RekaAI/reka-flash-3) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/RekaAI/reka-flash-3) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo sm54/reka-flash-3-Q6_K-GGUF --hf-file reka-flash-3-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo sm54/reka-flash-3-Q6_K-GGUF --hf-file reka-flash-3-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo sm54/reka-flash-3-Q6_K-GGUF --hf-file reka-flash-3-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo sm54/reka-flash-3-Q6_K-GGUF --hf-file reka-flash-3-q6_k.gguf -c 2048 ```
mradermacher/Dr.RP.Samantha-L3-8B-GGUF
mradermacher
"2024-06-16T04:07:05Z"
15
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:WesPro/Dr.RP.Samantha-L3-8B", "base_model:quantized:WesPro/Dr.RP.Samantha-L3-8B", "endpoints_compatible", "region:us" ]
null
"2024-06-15T20:56:40Z"
--- base_model: WesPro/Dr.RP.Samantha-L3-8B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/WesPro/Dr.RP.Samantha-L3-8B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Dr.RP.Samantha-L3-8B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Dr.RP.Samantha-L3-8B-GGUF/resolve/main/Dr.RP.Samantha-L3-8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Dr.RP.Samantha-L3-8B-GGUF/resolve/main/Dr.RP.Samantha-L3-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Dr.RP.Samantha-L3-8B-GGUF/resolve/main/Dr.RP.Samantha-L3-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Dr.RP.Samantha-L3-8B-GGUF/resolve/main/Dr.RP.Samantha-L3-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Dr.RP.Samantha-L3-8B-GGUF/resolve/main/Dr.RP.Samantha-L3-8B.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Dr.RP.Samantha-L3-8B-GGUF/resolve/main/Dr.RP.Samantha-L3-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Dr.RP.Samantha-L3-8B-GGUF/resolve/main/Dr.RP.Samantha-L3-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Dr.RP.Samantha-L3-8B-GGUF/resolve/main/Dr.RP.Samantha-L3-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Dr.RP.Samantha-L3-8B-GGUF/resolve/main/Dr.RP.Samantha-L3-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Dr.RP.Samantha-L3-8B-GGUF/resolve/main/Dr.RP.Samantha-L3-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Dr.RP.Samantha-L3-8B-GGUF/resolve/main/Dr.RP.Samantha-L3-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Dr.RP.Samantha-L3-8B-GGUF/resolve/main/Dr.RP.Samantha-L3-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Dr.RP.Samantha-L3-8B-GGUF/resolve/main/Dr.RP.Samantha-L3-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Dr.RP.Samantha-L3-8B-GGUF/resolve/main/Dr.RP.Samantha-L3-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Dr.RP.Samantha-L3-8B-GGUF/resolve/main/Dr.RP.Samantha-L3-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
timm/tf_efficientnet_b5.ap_in1k
timm
"2025-01-21T18:18:56Z"
285
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "transformers", "dataset:imagenet-1k", "arxiv:1905.11946", "arxiv:1911.09665", "license:apache-2.0", "region:us" ]
image-classification
"2022-12-13T00:03:57Z"
--- tags: - image-classification - timm - transformers library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for tf_efficientnet_b5.ap_in1k A EfficientNet image classification model. Trained on ImageNet-1k with AdvProp (adversarial examples) in Tensorflow by paper authors, ported to PyTorch by Ross Wightman. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 30.4 - GMACs: 10.5 - Activations (M): 98.9 - Image size: 456 x 456 - **Papers:** - EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks: https://arxiv.org/abs/1905.11946 - Adversarial Examples Improve Image Recognition: https://arxiv.org/abs/1911.09665 - **Dataset:** ImageNet-1k - **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('tf_efficientnet_b5.ap_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'tf_efficientnet_b5.ap_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 24, 228, 228]) # torch.Size([1, 40, 114, 114]) # torch.Size([1, 64, 57, 57]) # torch.Size([1, 176, 29, 29]) # torch.Size([1, 512, 15, 15]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'tf_efficientnet_b5.ap_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 2048, 15, 15) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
## Citation ```bibtex @inproceedings{tan2019efficientnet, title={Efficientnet: Rethinking model scaling for convolutional neural networks}, author={Tan, Mingxing and Le, Quoc}, booktitle={International conference on machine learning}, pages={6105--6114}, year={2019}, organization={PMLR} } ``` ```bibtex @article{Xie2019AdversarialEI, title={Adversarial Examples Improve Image Recognition}, author={Cihang Xie and Mingxing Tan and Boqing Gong and Jiang Wang and Alan Loddon Yuille and Quoc V. Le}, journal={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year={2019}, pages={816-825} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
claudios/sptcode
claudios
"2024-05-22T03:18:18Z"
174
0
transformers
[ "transformers", "safetensors", "bart", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
"2024-04-30T17:56:18Z"
This is an *unofficial* reupload of [models/sptcode/pre-trained/models/all](https://huggingface.co/models/sptcode/pre-trained/models/all) in the `SafeTensors` format using `transformers` `4.40.1`. The goal of this reupload is to prevent older models that are still relevant baselines from becoming stale as a result of changes in HuggingFace. Additionally, I may include minor corrections, such as model max length configuration.
slimaneMakh/triplet_CloseHlabel_farLabel_400K-5eps-distiluse-base-multilingual-cased-v2_10may
slimaneMakh
"2024-05-10T10:15:20Z"
9
0
sentence-transformers
[ "sentence-transformers", "safetensors", "distilbert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2024-05-10T10:06:28Z"
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # slimaneMakh/triplet_CloseHlabel_farLabel_400K-5eps-distiluse-base-multilingual-cased-v2_10may This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search. It is trained on a dataset of 400K examples : * Anchor: Label * Positive : Human label * Negative : also human label It is build as a positive pair (anchor, positive) and negative pair. The negative is sampled amongst closest human label from the positive sample with a different label. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('slimaneMakh/triplet_CloseHlabel_farLabel_400K-5eps-distiluse-base-multilingual-cased-v2_10may') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=slimaneMakh/triplet_CloseHlabel_farLabel_400K-5eps-distiluse-base-multilingual-cased-v2_10may) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 4375 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5} ``` Parameters of the fit()-Method: ``` { "epochs": 5, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
alchemab/antiberta2
alchemab
"2024-06-04T14:43:12Z"
3,905
5
transformers
[ "transformers", "safetensors", "roformer", "fill-mask", "biology", "medical", "arxiv:2104.09864", "arxiv:2103.00020", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2023-12-02T13:53:06Z"
--- license: other widget: - text: Ḣ Q V Q [MASK] E tags: - biology - medical --- ## AntiBERTa2 🧬 AntiBERTa2 is an antibody-specific language model based on the [RoFormer model](https://arxiv.org/abs/2104.09864) - it is pre-trained using masked language modelling. We also provide a multimodal version of AntiBERTa2, AntiBERTa2-CSSP, that has been trained using a contrastive objective, similar to the [CLIP method](https://arxiv.org/abs/2103.00020). Further details on both AntiBERTa2 and AntiBERTa2-CSSP are described in our [paper](https://www.mlsb.io/papers_2023/Enhancing_Antibody_Language_Models_with_Structural_Information.pdf) accepted at the NeurIPS MLSB Workshop 2023. Both AntiBERTa2 models are only available for non-commercial use. Output antibody sequences (e.g. from infilling via masked language models) can only be used for non-commercial use. For any users seeking commercial use of our model and generated antibodies, please reach out to us at [[email protected]](mailto:[email protected]). | Model variant | Parameters | Config | | ------------- | ---------- | ------ | | [AntiBERTa2](https://huggingface.co/alchemab/antiberta2) | 202M | 16L, 16H, 1024d | | [AntiBERTa2-CSSP](https://huggingface.co/alchemab/antiberta2-cssp) | 202M | 16L, 16H, 1024d | ## Example usage ``` >>> from transformers import ( RoFormerForMaskedLM, RoFormerTokenizer, pipeline, RoFormerForSequenceClassification ) >>> tokenizer = RoFormerTokenizer.from_pretrained("alchemab/antiberta2") >>> model = RoFormerForMaskedLM.from_pretrained("alchemab/antiberta2") >>> filler = pipeline(model=model, tokenizer=tokenizer) >>> filler("Ḣ Q V Q ... C A [MASK] D ... T V S S") # fill in the mask >>> new_model = RoFormerForSequenceClassification.from_pretrained( "alchemab/antiberta2") # this will of course raise warnings # that a new linear layer will be added # and randomly initialized ```
FounderOfHuggingface/gpt2_gen_lora_r16_ag_news_t200_e5_member_shadow30
FounderOfHuggingface
"2023-12-20T15:51:11Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "region:us" ]
null
"2023-12-20T15:51:08Z"
--- library_name: peft base_model: gpt2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
DaniElAbrazos/q-FrozenLake-v1-4x4-noSlippery
DaniElAbrazos
"2024-04-19T16:15:50Z"
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2024-04-19T16:15:48Z"
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="DaniElAbrazos/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
hmteams/flair-hipe-2022-ajmc-en
hmteams
"2023-10-17T21:30:01Z"
10
0
flair
[ "flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "en", "base_model:hmteams/teams-base-historic-multilingual-discriminator", "base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator", "license:mit", "region:us" ]
token-classification
"2023-10-17T09:43:18Z"
--- language: en license: mit tags: - flair - token-classification - sequence-tagger-model base_model: hmteams/teams-base-historic-multilingual-discriminator widget: - text: Cp . Eur . Phoen . 240 , 1 , αἷμα ddiov φλέγέι . --- # Fine-tuned Flair Model on AjMC English NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmTEAMS as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr3e-05 | [0.8606][1] | [0.8657][2] | [0.8612][3] | [0.8609][4] | [0.8623][5] | 86.21 ± 0.19 | | bs8-e10-lr3e-05 | [0.8479][6] | [0.8698][7] | [0.8613][8] | [0.8602][9] | [0.8588][10] | 85.96 ± 0.7 | | bs8-e10-lr5e-05 | [0.8547][11] | [0.8558][12] | [0.8568][13] | [0.865][14] | [0.8633][15] | 85.91 ± 0.42 | | bs4-e10-lr5e-05 | [0.8571][16] | [0.8432][17] | [0.8595][18] | [0.8656][19] | [0.8455][20] | 85.42 ± 0.85 | [1]: https://hf.co/stefan-it/hmbench-ajmc-en-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-en-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-en-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-en-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-en-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-en-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-en-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-en-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-en-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-en-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-en-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-en-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-en-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-en-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-en-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-en-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: https://hf.co/stefan-it/hmbench-ajmc-en-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: 
https://hf.co/stefan-it/hmbench-ajmc-en-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-en-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-en-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
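The card reports results but no usage; a minimal Flair sketch is below, reusing the widget text from the metadata above as the example sentence (the "ner" tag type is assumed, matching standard Flair sequence taggers).

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger directly from the Hub.
tagger = SequenceTagger.load("hmteams/flair-hipe-2022-ajmc-en")

sentence = Sentence("Cp . Eur . Phoen . 240 , 1 , αἷμα ddiov φλέγέι .")
tagger.predict(sentence)

for entity in sentence.get_spans("ner"):  # tag type assumed to be "ner"
    print(entity)
```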
lesso14/21ae9663-b5ac-49ca-9fc4-4c2eb378a72b
lesso14
"2025-03-18T23:15:19Z"
0
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:echarlaix/tiny-random-mistral", "base_model:adapter:echarlaix/tiny-random-mistral", "license:apache-2.0", "region:us" ]
null
"2025-03-18T23:11:28Z"
--- library_name: peft license: apache-2.0 base_model: echarlaix/tiny-random-mistral tags: - axolotl - generated_from_trainer model-index: - name: 21ae9663-b5ac-49ca-9fc4-4c2eb378a72b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: echarlaix/tiny-random-mistral bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 4203a208385b2e15_train_data.json ds_type: json format: custom path: /workspace/input_data/4203a208385b2e15_train_data.json type: field_input: phonemes field_instruction: text field_output: text_description format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 3 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 500 evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 8 gradient_checkpointing: true group_by_length: true hub_model_id: lesso14/21ae9663-b5ac-49ca-9fc4-4c2eb378a72b hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.000214 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 50 lora_alpha: 128 lora_dropout: 0.15 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_steps: 500 micro_batch_size: 4 mlflow_experiment_name: /tmp/4203a208385b2e15_train_data.json model_type: AutoModelForCausalLM num_epochs: 10 optimizer: adamw_torch_fused output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 500 saves_per_epoch: null seed: 140 sequence_len: 1024 special_tokens: pad_token: </s> strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 08f50b0e-6976-4b33-92a6-13a122249e33 wandb_project: 14a wandb_run: your_name wandb_runid: 08f50b0e-6976-4b33-92a6-13a122249e33 warmup_steps: 100 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 21ae9663-b5ac-49ca-9fc4-4c2eb378a72b This model is a fine-tuned version of [echarlaix/tiny-random-mistral](https://huggingface.co/echarlaix/tiny-random-mistral) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 10.0237 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000214 - train_batch_size: 4 - eval_batch_size: 4 - seed: 140 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0002 | 1 | 10.3730 | | 80.3639 | 0.1093 | 500 | 10.0237 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
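As a usage reference, here is a minimal sketch for loading this LoRA adapter on top of its base model (the generation settings are assumptions, not part of the original card):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "echarlaix/tiny-random-mistral"                    # base model from the config above
adapter_id = "lesso14/21ae9663-b5ac-49ca-9fc4-4c2eb378a72b"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the fine-tuned LoRA weights to the base model
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

inputs = tokenizer("Hello", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```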
mradermacher/Qwen-modelstock2-15B-GGUF
mradermacher
"2024-10-28T18:13:09Z"
9
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:allknowingroger/Qwen-modelstock2-15B", "base_model:quantized:allknowingroger/Qwen-modelstock2-15B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-10-28T13:05:32Z"
--- base_model: allknowingroger/Qwen-modelstock2-15B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/allknowingroger/Qwen-modelstock2-15B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen-modelstock2-15B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-GGUF/resolve/main/Qwen-modelstock2-15B.Q2_K.gguf) | Q2_K | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-GGUF/resolve/main/Qwen-modelstock2-15B.Q3_K_S.gguf) | Q3_K_S | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-GGUF/resolve/main/Qwen-modelstock2-15B.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-GGUF/resolve/main/Qwen-modelstock2-15B.Q3_K_L.gguf) | Q3_K_L | 8.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-GGUF/resolve/main/Qwen-modelstock2-15B.IQ4_XS.gguf) | IQ4_XS | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-GGUF/resolve/main/Qwen-modelstock2-15B.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-GGUF/resolve/main/Qwen-modelstock2-15B.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-GGUF/resolve/main/Qwen-modelstock2-15B.Q5_K_S.gguf) | Q5_K_S | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-GGUF/resolve/main/Qwen-modelstock2-15B.Q5_K_M.gguf) | Q5_K_M | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-GGUF/resolve/main/Qwen-modelstock2-15B.Q6_K.gguf) | Q6_K | 12.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-GGUF/resolve/main/Qwen-modelstock2-15B.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
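As an illustration, here is a minimal sketch for running one of the quants above with llama-cpp-python (the runtime choice and generation settings are assumptions, not part of this card; any GGUF-compatible runtime works):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Fetch the "fast, recommended" Q4_K_M quant from the table above
gguf_path = hf_hub_download(
    repo_id="mradermacher/Qwen-modelstock2-15B-GGUF",
    filename="Qwen-modelstock2-15B.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Q: What is a model merge? A:", max_tokens=64)
print(out["choices"][0]["text"])
```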
Sumail/Barista27
Sumail
"2024-04-03T14:11:19Z"
90
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "mergekit", "merge", "conversational", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-03T14:09:38Z"
--- base_model: - GamblerOnTrain/fufu001 - GamblerOnTrain/fufu003 library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [GamblerOnTrain/fufu001](https://huggingface.co/GamblerOnTrain/fufu001) * [GamblerOnTrain/fufu003](https://huggingface.co/GamblerOnTrain/fufu003) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: GamblerOnTrain/fufu001 layer_range: [0, 24] - model: GamblerOnTrain/fufu003 layer_range: [0, 24] merge_method: slerp base_model: GamblerOnTrain/fufu001 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
p1atdev/qwen2.5-0.5b-grpo-math-01
p1atdev
"2025-02-06T16:28:10Z"
7
1
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "ja", "base_model:Qwen/Qwen2.5-0.5B", "base_model:finetune:Qwen/Qwen2.5-0.5B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-02-02T14:52:44Z"
--- license: apache-2.0 language: - ja library_name: transformers base_model: - Qwen/Qwen2.5-0.5B --- 簡単な算数問題を解けるように GRPO で学習してみた。学習コードは下の方にあります。 学習データは簡単な問題なのでその場で合成したものを使いました。(コード参照) prompt format: ``` あなたはアシスタントとして回答します。 ユーザーの質問に対して、<think></think>ブロック内で思考してから<answer></answer>でファイナルアンサーしてください。 具体的には、「<think>ここに思考過程</think><answer>ここに解答</answer>」という形です。 「ユーザー」の質問の後に、「アシスタント」が回答します。 ユーザー: 次の ? に入る数値を計算して回答してください。 {formula} アシスタント: ``` example `formula`: ``` 4 + 3 * 2 = ? ``` expected output: ```xml <think>思考内容</think><answer>解答</answer> ``` ## Example ```py from transformers import pipeline formula = "9 + 3 * 5 = ?" # A + B * C か A * B + C の形式のみ対応 prompt = f"""\ あなたはアシスタントとして回答します。 ユーザーの質問に対して、<think></think>ブロック内で思考してから<answer></answer>でファイナルアンサーしてください。 具体的には、「<think>ここに思考過程</think><answer>ここに解答</answer>」という形です。 「ユーザー」の質問の後に、「アシスタント」が回答します。 ユーザー: 次の ? に入る数値を計算して回答してください。 {formula} アシスタント: """ print(pipe(prompt)[0]["generated_text"][len(prompt):]) # <think>9 + 3 * 5 = 9 + 15 = 24</think><answer>24</answer> ``` ## Training information - Base model: [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) - Device: 1x A100 80G - GPU Hour: about 1 hour - Total training steps: 140 steps ([the last checkpoint](https://huggingface.co/p1atdev/qwen2.5-0.5b-grpo-math-01/blob/9ede090f5ed41d88c16ffbc56a81b0772f19679e/model.safetensors)) Wandb log: https://wandb.ai/p1atdev/grpo-math-01/runs/ytv8wxll ## Training code ```py import random import re import torch from datasets import Dataset from trl import GRPOConfig, GRPOTrainer from transformers import AutoTokenizer, AutoModelForCausalLM import wandb SYSTEM_PROMPT = """命令: あなたはアシスタントとして回答します。 ユーザーの質問に対して、<think></think>ブロック内で思考してから<answer></answer>でファイナルアンサーしてください。 具体的には、「<think>ここに思考過程</think><answer>ここに解答</answer>」という形です。 「ユーザー」の質問の後に、「アシスタント」が回答します。 ユーザー: """ MODEL_NAME = "Qwen/Qwen2.5-0.5B" def generate_problem(): # written by ChatGPT # 1~10 の間のランダムな整数を3つ生成 a = random.randint(1, 10) b = random.randint(1, 10) c = random.randint(1, 10) # 足し算と掛け算の両方を含むように、2通りのパターンからランダムに選択 if random.randint(0, 1) == 0: # パターン1: 足し算+掛け算 => 例: a + b * c expression = f"{a} + {b} * {c}" else: # パターン2: 掛け算+足し算 => 例: a * b + c expression = f"{a} * {b} + {c}" # Python の eval() を用いて答えを計算(演算子の優先順位に従う) answer = eval(expression) return f"{expression} = ?", answer def generate_random_pair(max_count: int): for i in range(max_count): formula, answer = generate_problem() question = f"""{SYSTEM_PROMPT} 次の ? 
に入る数値を計算して回答してください。 {formula} アシスタント: """ yield {"id": i, "prompt": question, "ground_truth": answer} # format reward FORMAT_PATTERN = re.compile(r"^<think>.*?</think><answer>.*?</answer>$") def format_reward_func(completions: list[str], **kwargs): """Reward function that checks if the completion has a specific format.""" matches = [FORMAT_PATTERN.match(content) for content in completions] return [1.0 if match else 0.0 for match in matches] # answer reward ANSWER_PATTERN = re.compile(r"<answer>(\d+)</answer>") def answer_reward_func(completions: list[str], ground_truth: list[str], **kwargs): # Regular expression to capture content inside \boxed{} matches = [ANSWER_PATTERN.search(completion) for completion in completions] contents = [match.group(1) if match else "" for match in matches] # Reward 1 if the content is the same as the ground truth, 0 otherwise return [1.0 if c == str(gt) else 0.0 for c, gt in zip(contents, ground_truth)] def main(): ds = Dataset.from_generator(generate_random_pair, gen_kwargs={"max_count": 100000}) # 100000 is too many, we don't need so much for this task model = AutoModelForCausalLM.from_pretrained( MODEL_NAME, attn_implementation="flash_attention_2", torch_dtype=torch.bfloat16, device_map="auto", ) tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME) tokenizer.pad_token = tokenizer.eos_token project_name = YOUR_WANDB_PROJECT_NAME push_hub_name = YOUR_PUSH_HUB_NAME wandb.init(project=project_name) train_args = GRPOConfig( output_dir="./grpo-01", #! output path use_vllm=False, # True to use vLLM overwrite_output_dir=True, num_train_epochs=10, num_generations=4, per_device_train_batch_size=16, # per_device_eval_batch_size=4, gradient_accumulation_steps=1, gradient_checkpointing=True, learning_rate=1e-4, # maybe a bit high warmup_ratio=0.01, weight_decay=0.01, optim="adamw_8bit", adam_epsilon=1e-8, lr_scheduler_type="cosine_with_min_lr", lr_scheduler_kwargs={ "min_lr": 5e-5, "num_cycles": 0.5, }, # eval_strategy="steps", # eval did not work well # eval_steps=10, save_steps=10, save_total_limit=2, logging_steps=1, logging_first_step=True, # load_best_model_at_end=True, # metric_for_best_model="eval_loss", torch_compile=False, # compile does not work fp16=False, bf16=True, report_to=["wandb"], hub_model_id=push_hub_name, hub_private_repo=True, push_to_hub=True, save_safetensors=True, ) trainer = GRPOTrainer( model=model, processing_class=tokenizer, train_dataset=ds, # eval_dataset=ds["test"], reward_funcs=[format_reward_func, answer_reward_func], args=train_args, ) trainer.train() if __name__ == "__main__": main() ```
isspek/bert-base-cased_zika_mistral_4_2e-5_16_weight
isspek
"2025-02-23T22:22:19Z"
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2025-02-23T22:22:04Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tensorblock/Arch-Function-1.5B-GGUF
tensorblock
"2024-11-27T00:11:29Z"
6
1
transformers
[ "transformers", "gguf", "TensorBlock", "GGUF", "text-generation", "en", "base_model:katanemo/Arch-Function-1.5B", "base_model:quantized:katanemo/Arch-Function-1.5B", "license:other", "endpoints_compatible", "region:us", "conversational" ]
text-generation
"2024-11-27T00:06:37Z"
--- license: other license_name: katanemo-research license_link: https://huggingface.co/katanemolabs/Arch-Function-1.5B/blob/main/LICENSE base_model: katanemo/Arch-Function-1.5B language: - en pipeline_tag: text-generation library_name: transformers tags: - TensorBlock - GGUF --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"> Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a> </p> </div> </div> ## katanemo/Arch-Function-1.5B - GGUF This repo contains GGUF format model files for [katanemo/Arch-Function-1.5B](https://huggingface.co/katanemo/Arch-Function-1.5B). The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d). <div style="text-align: left; margin: 20px 0;"> <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;"> Run them on the TensorBlock client using your local machine ↗ </a> </div> ## Prompt template ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Model file specification | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [Arch-Function-1.5B-Q2_K.gguf](https://huggingface.co/tensorblock/Arch-Function-1.5B-GGUF/blob/main/Arch-Function-1.5B-Q2_K.gguf) | Q2_K | 0.676 GB | smallest, significant quality loss - not recommended for most purposes | | [Arch-Function-1.5B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Arch-Function-1.5B-GGUF/blob/main/Arch-Function-1.5B-Q3_K_S.gguf) | Q3_K_S | 0.761 GB | very small, high quality loss | | [Arch-Function-1.5B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Arch-Function-1.5B-GGUF/blob/main/Arch-Function-1.5B-Q3_K_M.gguf) | Q3_K_M | 0.824 GB | very small, high quality loss | | [Arch-Function-1.5B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Arch-Function-1.5B-GGUF/blob/main/Arch-Function-1.5B-Q3_K_L.gguf) | Q3_K_L | 0.880 GB | small, substantial quality loss | | [Arch-Function-1.5B-Q4_0.gguf](https://huggingface.co/tensorblock/Arch-Function-1.5B-GGUF/blob/main/Arch-Function-1.5B-Q4_0.gguf) | Q4_0 | 0.935 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [Arch-Function-1.5B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Arch-Function-1.5B-GGUF/blob/main/Arch-Function-1.5B-Q4_K_S.gguf) | Q4_K_S | 0.940 GB | small, greater quality loss | | [Arch-Function-1.5B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Arch-Function-1.5B-GGUF/blob/main/Arch-Function-1.5B-Q4_K_M.gguf) | Q4_K_M | 0.986 GB | medium, balanced quality - recommended | | [Arch-Function-1.5B-Q5_0.gguf](https://huggingface.co/tensorblock/Arch-Function-1.5B-GGUF/blob/main/Arch-Function-1.5B-Q5_0.gguf) | Q5_0 | 1.099 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | 
[Arch-Function-1.5B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Arch-Function-1.5B-GGUF/blob/main/Arch-Function-1.5B-Q5_K_S.gguf) | Q5_K_S | 1.099 GB | large, low quality loss - recommended | | [Arch-Function-1.5B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Arch-Function-1.5B-GGUF/blob/main/Arch-Function-1.5B-Q5_K_M.gguf) | Q5_K_M | 1.125 GB | large, very low quality loss - recommended | | [Arch-Function-1.5B-Q6_K.gguf](https://huggingface.co/tensorblock/Arch-Function-1.5B-GGUF/blob/main/Arch-Function-1.5B-Q6_K.gguf) | Q6_K | 1.273 GB | very large, extremely low quality loss | | [Arch-Function-1.5B-Q8_0.gguf](https://huggingface.co/tensorblock/Arch-Function-1.5B-GGUF/blob/main/Arch-Function-1.5B-Q8_0.gguf) | Q8_0 | 1.647 GB | very large, extremely low quality loss - not recommended | ## Downloading instructions ### Command line First, install the Hugging Face CLI client ```shell pip install -U "huggingface_hub[cli]" ``` Then, download the individual model file to a local directory ```shell huggingface-cli download tensorblock/Arch-Function-1.5B-GGUF --include "Arch-Function-1.5B-Q2_K.gguf" --local-dir MY_LOCAL_DIR ``` If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try: ```shell huggingface-cli download tensorblock/Arch-Function-1.5B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf' ```
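For a quick local test, here is a hedged inference sketch with llama-cpp-python that fills in the prompt template above (the runtime and sampling settings are assumptions, not part of this card):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load the recommended Q4_K_M file straight from the Hub
llm = Llama.from_pretrained(
    repo_id="tensorblock/Arch-Function-1.5B-GGUF",
    filename="Arch-Function-1.5B-Q4_K_M.gguf",
)

# The ChatML-style prompt template documented above
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhat is function calling?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(llm(prompt, max_tokens=128, stop=["<|im_end|>"])["choices"][0]["text"])
```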
Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-3_75bpw_exl2
Zoyd
"2024-05-26T10:11:12Z"
5
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl", "merges", "conversational", "en", "dataset:teknium/OpenHermes-2.5", "base_model:NousResearch/Hermes-2-Pro-Llama-3-8B", "base_model:quantized:NousResearch/Hermes-2-Pro-Llama-3-8B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
"2024-05-26T09:24:38Z"
--- base_model: NousResearch/Hermes-2-Pro-Llama-3-8B tags: - Llama-3 - instruct - finetune - chatml - DPO - RLHF - gpt4 - synthetic data - distillation - function calling - json mode - axolotl - merges model-index: - name: Hermes-2-Pro-Llama-3-Instruct-8B-Merge results: [] language: - en datasets: - teknium/OpenHermes-2.5 widget: - example_title: Hermes 2 Pro Llama-3 Instruct Merge messages: - role: system content: >- You are a sentient, superintelligent artificial general intelligence, here to teach and assist me. - role: user content: >- Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world. license: apache-2.0 --- **Exllamav2** quant (**exl2** / **3.75 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-2_2bpw_exl2)**</center> | <center>3250 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-2_5bpw_exl2)**</center> | <center>3479 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-3_0bpw_exl2)**</center> | <center>3895 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-3_5bpw_exl2)**</center> | <center>4311 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-3_75bpw_exl2)**</center> | <center>4519 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-4_0bpw_exl2)**</center> | <center>4726 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-4_25bpw_exl2)**</center> | <center>4933 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-5_0bpw_exl2)**</center> | <center>5558 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-6_0bpw_exl2)**</center> | <center>6497 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-6_5bpw_exl2)**</center> | <center>6910 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-8_0bpw_exl2)**</center> | <center>8049 MB</center> | <center>8</center> | # - Hermes-2 Θ Llama-3 8B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/HQnQmNM1L3KXGhp0wUzHH.png) ## Model Description Hermes-2 Θ (Theta) is the first experimental merged model released by [Nous Research](https://nousresearch.com/), in collaboration with Charles Goddard at [Arcee](https://www.arcee.ai/), the team behind MergeKit. Hermes-2 Θ is a merged and then further RLHF'ed version our excellent Hermes 2 Pro model and Meta's Llama-3 Instruct model to form a new model, Hermes-2 Θ, combining the best of both worlds of each model. 
## Example Outputs ### Create New Mythos: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/dvKhnSvHdx4nTQIqB9Lpv.png) ### Chat with a Meta-Cognitive Entity ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/GwdCqowE6GQylineqehhx.png) ### Ask for a structured JSON output: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/us72aL9gwUXdqSHetRVRV.png) # Prompt Format Hermes 2 Θ uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model. This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns. This format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI. Prompt with system instruction (Use whatever system prompt you like, this is just an example!): ``` <|im_start|>system You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|> <|im_start|>user Hello, who are you?<|im_end|> <|im_start|>assistant Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|> ``` This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method: ```python messages = [ {"role": "system", "content": "You are Hermes 2."}, {"role": "user", "content": "Hello, who are you?"} ] gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt") model.generate(**gen_input) ``` When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure that the model continues with an assistant response. To utilize the prompt format without a system prompt, simply leave the line out. ## Prompt Format for Function Calling Our model was trained on specific system prompts and structures for Function Calling. While the system prompt looks complicated, we have created a GitHub repo containing code to easily build these based on real python functions. You should use the system role with this message, followed by a function signature json as this example shows here. ``` <|im_start|>system You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. 
Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows: <tool_call> {"arguments": <args-dict>, "name": <function-name>} </tool_call><|im_end|> ``` To complete the function call, create a user prompt that follows the above system prompt, like so: ``` <|im_start|>user Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|> ``` The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling): ``` <|im_start|>assistant <tool_call> {"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"} </tool_call><|im_end|> ``` Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so: ``` <|im_start|>tool <tool_response> {"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}} </tool_response> <|im_end|> ``` The assistant will then read in that data from the function's response, and generate a natural language response: ``` <|im_start|>assistant The stock fundamentals data for Tesla (TSLA) are as follows: - **Symbol**: TSLA - **Company Name**: Tesla, Inc. - **Sector**: Consumer Cyclical - **Industry**: Auto Manufacturers - **Market Capitalization**: $566,160,130,480 - **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73 - **Price-to-Book Ratio (PB Ratio)**: 9.04 - **Dividend Yield**: N/A - **Trailing Earnings Per Share (EPS)**: $4.3 - **Beta Value of the Stock**: 2.42 - **52-Week High Price of the Stock**: $299.29 - **52-Week Low Price of the Stock**: $152.37 This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. 
It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|> ``` ## Prompt Format for JSON Mode / Structured Outputs Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema. Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main ``` <|im_start|>system You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|> ``` Given the {schema} that you provide, it should follow the format of that json to create it's response, all you have to do is give a typical user prompt, and it will respond in JSON. # Benchmarks ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/suBbCUIxpcRvhCv6-DBDQ.png) ## GPT4All: ``` |    Task     |Version| Metric |Value |   |Stderr| |-------------|------:|--------|-----:|---|-----:| |arc_challenge|      0|acc     |0.5529|±  |0.0145| |             |       |acc_norm|0.5870|±  |0.0144| |arc_easy     |      0|acc     |0.8371|±  |0.0076| |             |       |acc_norm|0.8144|±  |0.0080| |boolq        |      1|acc     |0.8599|±  |0.0061| |hellaswag    |      0|acc     |0.6133|±  |0.0049| |             |       |acc_norm|0.7989|±  |0.0040| |openbookqa   |      0|acc     |0.3940|±  |0.0219| |             |       |acc_norm|0.4680|±  |0.0223| |piqa         |      0|acc     |0.8063|±  |0.0092| |             |       |acc_norm|0.8156|±  |0.0090| |winogrande   |      0|acc     |0.7372|±  |0.0124| ``` Average: 72.59 ## AGIEval: ``` |             Task             |Version| Metric |Value |   |Stderr| |------------------------------|------:|--------|-----:|---|-----:| |agieval_aqua_rat              |      0|acc     |0.2441|±  |0.0270| |                              |       |acc_norm|0.2441|±  |0.0270| |agieval_logiqa_en             |      0|acc     |0.3687|±  |0.0189| |                              |       |acc_norm|0.3840|±  |0.0191| |agieval_lsat_ar               |      0|acc     |0.2304|±  |0.0278| |                              |       |acc_norm|0.2174|±  |0.0273| |agieval_lsat_lr               |      0|acc     |0.5471|±  |0.0221| |                              |       |acc_norm|0.5373|±  |0.0221| |agieval_lsat_rc               |      0|acc     |0.6617|±  |0.0289| |                              |       |acc_norm|0.6357|±  |0.0294| |agieval_sat_en                |      0|acc     |0.7670|±  |0.0295| |                              |       |acc_norm|0.7379|±  |0.0307| |agieval_sat_en_without_passage|      0|acc     |0.4417|±  |0.0347| |                              |       |acc_norm|0.4223|±  |0.0345| |agieval_sat_math              |      0|acc     |0.4000|±  |0.0331| |                              |       |acc_norm|0.3455|±  |0.0321| ``` Average: 44.05 ## BigBench: ``` |                      Task                      |Version|       Metric        |Value |   |Stderr| 
|------------------------------------------------|------:|---------------------|-----:|---|-----:| |bigbench_causal_judgement                       |      0|multiple_choice_grade|0.6000|±  |0.0356| |bigbench_date_understanding                     |      0|multiple_choice_grade|0.6585|±  |0.0247| |bigbench_disambiguation_qa                      |      0|multiple_choice_grade|0.3178|±  |0.0290| |bigbench_geometric_shapes                       |      0|multiple_choice_grade|0.2340|±  |0.0224| |                                                |       |exact_str_match      |0.0000|±  |0.0000| |bigbench_logical_deduction_five_objects         |      0|multiple_choice_grade|0.2980|±  |0.0205| |bigbench_logical_deduction_seven_objects        |      0|multiple_choice_grade|0.2057|±  |0.0153| |bigbench_logical_deduction_three_objects        |      0|multiple_choice_grade|0.5367|±  |0.0288| |bigbench_movie_recommendation                   |      0|multiple_choice_grade|0.4040|±  |0.0220| |bigbench_navigate                               |      0|multiple_choice_grade|0.4970|±  |0.0158| |bigbench_reasoning_about_colored_objects        |      0|multiple_choice_grade|0.7075|±  |0.0102| |bigbench_ruin_names                             |      0|multiple_choice_grade|0.4821|±  |0.0236| |bigbench_salient_translation_error_detection    |      0|multiple_choice_grade|0.2295|±  |0.0133| |bigbench_snarks                                 |      0|multiple_choice_grade|0.6906|±  |0.0345| |bigbench_sports_understanding                   |      0|multiple_choice_grade|0.5375|±  |0.0159| |bigbench_temporal_sequences                     |      0|multiple_choice_grade|0.6270|±  |0.0153| |bigbench_tracking_shuffled_objects_five_objects |      0|multiple_choice_grade|0.2216|±  |0.0118| |bigbench_tracking_shuffled_objects_seven_objects|      0|multiple_choice_grade|0.1594|±  |0.0088| |bigbench_tracking_shuffled_objects_three_objects|      0|multiple_choice_grade|0.5367|±  |0.0288| ``` Average: 44.13 **IFEval**: 72.64 **MT_Bench**: Turn 1 - 8.3875, Turn 2 - 8.00625, Average - 8.196875 # Inference Code Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM) Note: To use function calling, you should see the github repo above. 
```python # Code to inference Hermes with HF Transformers # Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages import torch from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM import bitsandbytes, flash_attn tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Theta-Llama-3-8B', trust_remote_code=True) model = LlamaForCausalLM.from_pretrained( "NousResearch/Hermes-2-Theta-Llama-3-8B", torch_dtype=torch.float16, device_map="auto", load_in_8bit=False, load_in_4bit=True, use_flash_attention_2=True ) prompts = [ """<|im_start|>system You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|> <|im_start|>user Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|> <|im_start|>assistant""", ] for chat in prompts: print(chat) input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda") generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id) response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True) print(f"Response: {response}") ``` ## Inference Code for Function Calling: All code for utilizing, parsing, and building function calling templates is available on our github: [https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/oi4CiGh50xmoviUQnh8R3.png) # Chat Interfaces When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box. In LM-Studio, simply select the ChatML Prefix on the settings side pane: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ls6WqV-GSxMw2RA3GuQiN.png) ## Quantized Versions: GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B-GGUF # How to cite: ```bibtex @misc{Hermes-2-Theta-Llama-3-8B, url={https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B}, title={Hermes-2-Theta-Llama-3-8B}, author={"Teknium", Charles Goddard, "interstellarninja", "theemozilla", "karan4d", "huemin_art"} } ```
nkpz/open_llama_7b_qlora_uncensored-gptq
nkpz
"2023-07-06T08:47:29Z"
5
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-07-06T08:32:58Z"
--- license: apache-2.0 --- 4-bit quantized files for [georgesung/open_llama_7b_qlora_uncensored](https://huggingface.co/georgesung/open_llama_7b_qlora_uncensored) Quantized using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa). Command used to quantize: `python llama.py /my/model/directory c4 --wbits 4 --true-sequential --act-order --save_safetensors /my/output/file.safetensors`
pandihao/test
pandihao
"2025-03-27T06:37:14Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:pandihao/test", "base_model:adapter:pandihao/test", "region:us" ]
null
"2025-03-27T06:20:41Z"
--- base_model: pandihao/test library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.0
marianokamp/ppo-LunarLander-v2
marianokamp
"2023-01-04T16:45:55Z"
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2023-01-04T16:21:33Z"
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 278.09 +/- 23.21 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
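Until the TODO above is filled in, here is a minimal loading sketch (the checkpoint filename is an assumption; check the Files & versions tab for the exact name):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

# Download the trained checkpoint from this repo (filename assumed)
checkpoint = load_from_hub(
    repo_id="marianokamp/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Quick sanity check against the reported mean reward
env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```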
Yntec/XenoGASM-MK2
Yntec
"2024-11-20T16:27:19Z"
2,187
2
diffusers
[ "diffusers", "safetensors", "iamxenos", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "base_model:Yntec/XenoGASM", "base_model:finetune:Yntec/XenoGASM", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2024-11-20T14:10:43Z"
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - iamxenos - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers base_model: - Yntec/XenoGASM --- # XenoGASM Mark II This model with the "'Attention' object has no attribute 'to_to_k'" error fixed (check https://huggingface.co/Yntec/XenoGASM-MK2/discussions/2 for more details) so it can be used in diffusers and GUIs like EasyDiffusion. It includes a version with the 840K VAE baked in for improved details. Samples and prompts (all use seed 9119): ![Anthropomorphic ducks animals wearing red dress around a cornucopia of vegetables, portrait retro eyes, tuxedos oil on canvas. steven belledi. Chibi. Midjourney prompts](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/aJRPvLW6j5eKlVezX3awa.png) Anthropomorphic ducks animals wearing red dress around a cornucopia of vegetables, portrait retro eyes, tuxedos oil on canvas. steven belledi. Chibi ![a cartoon cup of coffee, by Chris LaBrooy, (snowy new york in background), 8 k highly detailed ❤🔥 🔥 🤖 🚀, isometric beautiful Arabic table with empty plates with traces of food in style. year 2 0 2 2, in bladerunner city, q posket, in times square, starbucks, photo from 2022, cold as ice! 🧊 . Flux pro prompts](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/2Zne3lXYyuAa1PJJQ_qR1.png) a cartoon cup of coffee, by Chris LaBrooy, (snowy new york in background), 8 k highly detailed ❤🔥 🔥 🤖 🚀, isometric beautiful Arabic table with empty plates with traces of food in style. year 2 0 2 2, in bladerunner city, q posket, in times square, starbucks, photo from 2022, cold as ice! 🧊 ![90s movie still blue red teal colored beautiful vintage magazine photograph of paladin on Mars playing chess. Mushroom kingdom tall, round hills, presence of floating island. objects such as ? Blocks and coins, Warp Pipes, striped grasslands, and layered dirt. AI prompts](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/0NzDsFmfea8BGGYQ3GRVw.png) 90s movie still blue red teal colored beautiful vintage magazine photograph of paladin on Mars playing chess. Mushroom kingdom tall, round hills, presence of floating island. objects such as ? Blocks and coins, Warp Pipes, striped grasslands, and layered dirt. ![Materialcollage Helicopter, plastic toys on the table. dungeons and dragons campaign board game in a dream flower garden near a seaside spaceport environment where one draws mystical energy into their lives, fireflies, background art, pristine concept art, small, medium and large design elements, golden hour, in the style of wlop and ross tran. Best free online image generators](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/ejYEjBpMswnZDGHf64yOi.png) Materialcollage Helicopter, plastic toys on the table. dungeons and dragons campaign board game in a dream flower garden near a seaside spaceport environment where one draws mystical energy into their lives, fireflies, background art, pristine concept art, small, medium and large design elements, golden hour, in the style of wlop and ross tran
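A minimal diffusers sketch for reproducing the style of the samples above (step count and guidance scale are assumptions; the seed matches the samples):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/XenoGASM-MK2",
    torch_dtype=torch.float16,
).to("cuda")

prompt = (
    "Anthropomorphic ducks animals wearing red dress around a cornucopia of vegetables, "
    "portrait retro eyes, tuxedos oil on canvas. steven belledi. Chibi"
)
generator = torch.Generator("cuda").manual_seed(9119)  # seed used for the sample images

image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5, generator=generator).images[0]
image.save("xenogasm_sample.png")
```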
mradermacher/sarashina-13B-finetuned-v3-GGUF
mradermacher
"2025-04-02T15:53:00Z"
0
0
transformers
[ "transformers", "gguf", "unsloth", "trl", "sft", "en", "base_model:Mori-kamiyama/sarashina-13B-finetuned-v3", "base_model:quantized:Mori-kamiyama/sarashina-13B-finetuned-v3", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-04-02T04:56:58Z"
--- base_model: Mori-kamiyama/sarashina-13B-finetuned-v3 language: - en library_name: transformers license: mit quantized_by: mradermacher tags: - unsloth - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Mori-kamiyama/sarashina-13B-finetuned-v3 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/sarashina-13B-finetuned-v3-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/sarashina-13B-finetuned-v3-GGUF/resolve/main/sarashina-13B-finetuned-v3.Q2_K.gguf) | Q2_K | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/sarashina-13B-finetuned-v3-GGUF/resolve/main/sarashina-13B-finetuned-v3.Q3_K_S.gguf) | Q3_K_S | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/sarashina-13B-finetuned-v3-GGUF/resolve/main/sarashina-13B-finetuned-v3.Q3_K_M.gguf) | Q3_K_M | 6.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/sarashina-13B-finetuned-v3-GGUF/resolve/main/sarashina-13B-finetuned-v3.Q3_K_L.gguf) | Q3_K_L | 7.5 | | | [GGUF](https://huggingface.co/mradermacher/sarashina-13B-finetuned-v3-GGUF/resolve/main/sarashina-13B-finetuned-v3.IQ4_XS.gguf) | IQ4_XS | 7.6 | | | [GGUF](https://huggingface.co/mradermacher/sarashina-13B-finetuned-v3-GGUF/resolve/main/sarashina-13B-finetuned-v3.Q4_K_S.gguf) | Q4_K_S | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/sarashina-13B-finetuned-v3-GGUF/resolve/main/sarashina-13B-finetuned-v3.Q4_K_M.gguf) | Q4_K_M | 8.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/sarashina-13B-finetuned-v3-GGUF/resolve/main/sarashina-13B-finetuned-v3.Q5_K_S.gguf) | Q5_K_S | 9.6 | | | [GGUF](https://huggingface.co/mradermacher/sarashina-13B-finetuned-v3-GGUF/resolve/main/sarashina-13B-finetuned-v3.Q5_K_M.gguf) | Q5_K_M | 9.9 | | | [GGUF](https://huggingface.co/mradermacher/sarashina-13B-finetuned-v3-GGUF/resolve/main/sarashina-13B-finetuned-v3.Q6_K.gguf) | Q6_K | 11.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/sarashina-13B-finetuned-v3-GGUF/resolve/main/sarashina-13B-finetuned-v3.Q8_0.gguf) | Q8_0 | 14.7 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
TiepPhoiAI/NgocLinhAI
TiepPhoiAI
"2025-03-29T23:34:23Z"
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
"2025-03-29T23:29:22Z"
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/ChatGPT Image 13_30_37 29 thg 3, 2025.png - text: '-' output: url: images/z6367322156184_4e53847b9b4f89d827e4e760330a0242.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: TiepphoiAI, hot boy face Ngoc Linh, Vietnamese --- # Lora KOL Ngoc Linh <Gallery /> ## Model description ![NGOC LINH (39).jpg](https://cdn-uploads.huggingface.co/production/uploads/67e8812b8872eaaf8712417e/ZZwbzju_gdU187j4U2b6X.jpeg) ## Trigger words You should use `TiepphoiAI` to trigger the image generation. You should use `hot boy face Ngoc Linh` to trigger the image generation. You should use `Vietnamese` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/TiepPhoiAI/NgocLinhAI/tree/main) them in the Files & versions tab.
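A minimal diffusers sketch (assuming access to the gated FLUX.1-dev base weights, a CUDA GPU, and that `load_lora_weights` can resolve the safetensors file in this repo; the extra prompt text and step counts are illustrative):

```python
# Minimal sketch: apply this LoRA on top of FLUX.1-dev and prompt with the trigger words.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("TiepPhoiAI/NgocLinhAI")  # LoRA weights from this repo
pipe.to("cuda")

image = pipe(
    "TiepphoiAI, hot boy face Ngoc Linh, Vietnamese, studio portrait, soft lighting",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("ngoc_linh.png")
```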
LHRuig/douglaslimsx
LHRuig
"2025-03-25T21:01:47Z"
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
"2025-03-25T21:01:23Z"
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: suit output: url: images/suit.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: douglaslimsx --- # douglaslimsx <Gallery /> ## Model description douglaslimsx lora ## Trigger words You should use `douglaslimsx` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/LHRuig/douglaslimsx/tree/main) them in the Files & versions tab.
Ogamon/llama2_truth_model
Ogamon
"2024-07-10T15:24:06Z"
18
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-2-7b-hf", "base_model:finetune:meta-llama/Llama-2-7b-hf", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-07-10T15:02:58Z"
--- license: other base_model: meta-llama/Llama-2-7b-hf tags: - llama-factory - full - generated_from_trainer model-index: - name: train_2024-07-10-09-28-39 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # train_2024-07-10-09-28-39 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the truth_train dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 600 - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.42.3 - Pytorch 2.3.0a0+ebedce2 - Datasets 2.20.0 - Tokenizers 0.19.1
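For reference, a hypothetical sketch of how the hyperparameters above would map onto standard 🤗 `TrainingArguments` (the actual run used LLaMA-Factory, so the real config format differs; the precision flag below is an assumption, not stated in this card):

```python
# Hypothetical mapping of the listed hyperparameters onto transformers TrainingArguments.
# The original run was launched via LLaMA-Factory; names below are standard HF arguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="train_2024-07-10-09-28-39",
    learning_rate=5e-6,
    per_device_train_batch_size=2,   # 8 GPUs x 2 per device x grad-accum 8 = effective batch 128
    gradient_accumulation_steps=8,
    lr_scheduler_type="cosine",
    warmup_steps=600,
    num_train_epochs=5.0,
    seed=42,
    bf16=True,                       # assumption: precision is not stated in the card
)
```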
VietnamAIHub/VietCoMath-o1-8B
VietnamAIHub
"2025-01-21T09:00:16Z"
10
0
null
[ "safetensors", "llama", "doi:10.57967/hf/3743", "region:us" ]
null
"2024-12-05T13:33:47Z"
# VietCoMath Model Usage

## Overview

Example code for running the VietCoMath-o1 small model on mathematical and coding problem-solving as well as general multi-task prompts. The model is based on Llama 3 8B.

#### Helper Functions

```python
import re


def check_patterns(response):
    """
    Check if the response contains all required XML patterns.

    Args:
        response (str): The model's generated response

    Returns:
        str: Parsed response or 'Missing' if patterns are incomplete
    """
    patterns = {
        'answer': r'<answer>(.*?)</answer>',
        'reflection': r'<reflection>(.*?)</reflection>',
        'steps': r'<step>(.*?)</step>',
        'count': r'<count>(.*?)</count>'
    }

    matches = {
        'answer': re.search(patterns['answer'], response, re.DOTALL),
        'reflection': re.search(patterns['reflection'], response, re.DOTALL),
        'steps': re.findall(patterns['steps'], response, re.DOTALL),
        'count': re.findall(patterns['count'], response, re.DOTALL)
    }

    return "Missing" if not all([matches['answer'], matches['reflection'],
                                 matches['steps'], matches['count']]) else response


def parse_response(response):
    """
    Parse the model's response and extract key components.

    Args:
        response (str): The model's generated response

    Returns:
        tuple: Parsed answer, reflection, steps, and clarification
    """
    response_check = check_patterns(response)

    if response_check == "Missing":
        clarification_match = re.search(r'<clarification>(.*?)</clarification>', response, re.DOTALL)
        clarification = clarification_match.group(1).strip() if clarification_match else response
        return "", "", [], clarification
    else:
        answer_match = re.search(r'<answer>(.*?)</answer>', response, re.DOTALL)
        reflection_match = re.search(r'<reflection>(.*?)</reflection>', response, re.DOTALL)

        answer = answer_match.group(1).strip() if answer_match else ""
        reflection = reflection_match.group(1).strip() if reflection_match else ""
        steps = re.findall(r'<step>(.*?)</step>', response, re.DOTALL)

        return answer, reflection, steps, ""
```

## Usage

### Basic Text Generation

```python
import transformers
import torch

# Load the model
model_id = "VietnamAIHub/VietCoMath-o1-8B"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

# Example mathematical word problem (Vietnamese): 100 students were admitted;
# 55 chose music, 44 chose sports, and 20 chose both.
# How many chose neither music nor sports?
problem = "Có 100 sinh viên đỗ đại học. Trong số đó, có 55 sinh viên chọn âm nhạc, 44 sinh viên chọn thể thao, và 20 sinh viên chọn cả 2. Hỏi có bao nhiêu sinh viên không chọn âm nhạc, cũng không chọn thể thao?"

# Prepare messages
messages = [
    {"role": "system", "content": ""},
    {"role": "user", "content": f"{problem}"},
]

# Define terminators
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

# Generate text
outputs = pipeline(
    messages,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

# The pipeline returns the whole chat; take the text of the final (assistant) message
generated_text = outputs[0]["generated_text"][-1]["content"]
answer, reflection, steps, clarification = parse_response(generated_text)

print(clarification)
print("------------Internal Thinking-------------")
print(steps)
print(reflection)
print("------------End of Internal Thinking-------------\n")
print("------------Final Answer-------------")
print(answer)
print("------------End of Answer-------------")
```

## Limitations

- The model is small-scale and may fail on very difficult problems; please verify its results.

## License

The model is based on Llama 3 8B.

## Citation

@misc {VietnamAIHub,
  author = { {VietnamAIHub} },
  title = { VietCoMath-o1-8B },
  year = 2024,
  url = { https://huggingface.co/VietnamAIHub/VietCoMath-o1-8B },
  doi = { 10.57967/hf/3743 },
  publisher = { Hugging Face }
}

## Collaboration & Contribution

You can connect directly with Trần Nhiệm at [email protected], or chat directly via LinkedIn, Facebook, X, or Zalo (+886 934 311 751).
ambet/mistral-v0.2_robot_lora_dict
ambet
"2024-02-13T17:00:13Z"
8
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-02-13T16:49:34Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
solidrust/dolphin-2.9.1-llama-3-8b-AWQ
solidrust
"2024-09-03T08:49:53Z"
82
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "4-bit", "AWQ", "autotrain_compatible", "generated_from_trainer", "axolotl", "endpoints_compatible", "conversational", "dataset:cognitivecomputations/Dolphin-2.9", "dataset:teknium/OpenHermes-2.5", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:cognitivecomputations/dolphin-coder", "dataset:cognitivecomputations/samantha-data", "dataset:microsoft/orca-math-word-problems-200k", "dataset:Locutusque/function-calling-chatml", "dataset:internlm/Agent-FLAN", "base_model:cognitivecomputations/dolphin-2.9.1-llama-3-8b", "base_model:quantized:cognitivecomputations/dolphin-2.9.1-llama-3-8b", "license:other", "text-generation-inference", "awq", "region:us" ]
text-generation
"2024-05-11T03:15:50Z"
--- base_model: cognitivecomputations/dolphin-2.9.1-llama-3-8b datasets: - cognitivecomputations/Dolphin-2.9 - teknium/OpenHermes-2.5 - m-a-p/CodeFeedback-Filtered-Instruction - cognitivecomputations/dolphin-coder - cognitivecomputations/samantha-data - microsoft/orca-math-word-problems-200k - Locutusque/function-calling-chatml - internlm/Agent-FLAN inference: false library_name: transformers license: other model-index: - name: out results: [] pipeline_tag: text-generation quantized_by: Suparious tags: - 4-bit - AWQ - text-generation - autotrain_compatible - generated_from_trainer - axolotl - endpoints_compatible --- # cognitivecomputations/dolphin-2.9.1-llama-3-8b AWQ - Model creator: [cognitivecomputations](https://huggingface.co/cognitivecomputations) - Original model: [dolphin-2.9.1-llama-3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9.1-llama-3-8b) <img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" /> ## Model Summary Dolphin 2.9.1 Llama 3 8b 🐬 Curated and trained by Eric Hartford, Lucas Atkins, and Fernando Fernandes, and Cognitive Computations Discord: https://discord.gg/8fbBeC7ZGx We have retrained our LLama-3-8b fine tune to address behavioral issues in the initial 2.9 dataset. Specifically, Systemchat was causing the model to be *too* reliant on the system prompt. Additionally, it had an occasional quirk that would cause the model to overly reference the system prompt. We also found generation length was at times not sufficient for any given task. We identified the culprit as Ultrachat. Accounting for these concerns, we removed systemchat and ultrachat from the dataset. It is otherwise identical to dolphin-2.9. Our appreciation for the sponsors of Dolphin 2.9.1: - [Crusoe Cloud](https://crusoe.ai/) - provided excellent on-demand 8xL40S node This model is based on Llama-3-8b, and is governed by [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE) The base model has 8k context, and the full-weight fine-tuning was with 4k sequence length. It took 1.5 days on an 8x L40S provided by Crusoe Cloud ## How to use ### Install the necessary packages ```bash pip install --upgrade autoawq autoawq-kernels ``` ### Example Python code ```python from awq import AutoAWQForCausalLM from transformers import AutoTokenizer, TextStreamer model_path = "solidrust/dolphin-2.9.1-llama-3-8b-AWQ" system_message = "You are dolphin-2.9.1-llama-3-8b, incarnated as a powerful AI. You were created by cognitivecomputations." # Load model model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) # Convert prompt to tokens prompt_template = """\ <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant""" prompt = "You're standing on the surface of the Earth. "\ "You walk one mile south, one mile west and one mile north. "\ "You end up exactly where you started. Where are you?" tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt), return_tensors='pt').input_ids.cuda() # Generate output generation_output = model.generate(tokens, streamer=streamer, max_new_tokens=512) ``` ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. 
Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support of all model types. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
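Beyond the AutoAWQ example above, the same checkpoint can also be served with vLLM; a minimal sketch (sampling values are illustrative, not taken from this card):

```python
# Minimal sketch: offline generation with vLLM (0.2.2+) using the AWQ weights.
from vllm import LLM, SamplingParams

llm = LLM(model="solidrust/dolphin-2.9.1-llama-3-8b-AWQ", quantization="awq")

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nName three uses for a paperclip.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
outputs = llm.generate([prompt], SamplingParams(temperature=0.7, max_tokens=256))
print(outputs[0].outputs[0].text)
```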
nathanialhunt/6ad20716-01cc-4251-9eb8-d18f6ae548ab
nathanialhunt
"2025-01-23T07:16:52Z"
6
0
peft
[ "peft", "safetensors", "dbrx", "axolotl", "generated_from_trainer", "base_model:katuni4ka/tiny-random-dbrx", "base_model:adapter:katuni4ka/tiny-random-dbrx", "region:us" ]
null
"2025-01-23T07:14:09Z"
--- library_name: peft base_model: katuni4ka/tiny-random-dbrx tags: - axolotl - generated_from_trainer model-index: - name: 6ad20716-01cc-4251-9eb8-d18f6ae548ab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: katuni4ka/tiny-random-dbrx bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - c540333983914c07_train_data.json ds_type: json format: custom path: /workspace/input_data/c540333983914c07_train_data.json type: field_instruction: sent2 field_output: ending0 format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: nathanialhunt/6ad20716-01cc-4251-9eb8-d18f6ae548ab hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 10 micro_batch_size: 2 mlflow_experiment_name: /tmp/c540333983914c07_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: b421b89d-5e84-408f-b1f2-add043a89b69 wandb_project: Birthday-SN56-5-Gradients-On-Demand wandb_run: your_name wandb_runid: b421b89d-5e84-408f-b1f2-add043a89b69 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 6ad20716-01cc-4251-9eb8-d18f6ae548ab This model is a fine-tuned version of [katuni4ka/tiny-random-dbrx](https://huggingface.co/katuni4ka/tiny-random-dbrx) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 11.5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 46.0 | 0.0001 | 1 | 11.5 | | 46.0 | 0.0002 | 3 | 11.5 | | 46.0 | 0.0004 | 6 | 11.5 | | 46.0 | 0.0006 | 9 | 11.5 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Szczotar93/layoutlm_alltags
Szczotar93
"2024-04-16T14:03:50Z"
75
0
transformers
[ "transformers", "pytorch", "layoutlm", "token-classification", "generated_from_trainer", "dataset:layoutlmv4", "base_model:microsoft/layoutlm-base-uncased", "base_model:finetune:microsoft/layoutlm-base-uncased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2024-04-16T13:59:34Z"
--- license: mit base_model: microsoft/layoutlm-base-uncased tags: - generated_from_trainer datasets: - layoutlmv4 model-index: - name: layoutlm_alltags results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlm_alltags This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the layoutlmv4 dataset. It achieves the following results on the evaluation set: - Loss: 0.0891 - Customer Address: {'precision': 0.7764705882352941, 'recall': 0.8048780487804879, 'f1': 0.7904191616766466, 'number': 82} - Customer Name: {'precision': 0.6666666666666666, 'recall': 0.8333333333333334, 'f1': 0.7407407407407408, 'number': 12} - Invoice Number: {'precision': 0.8571428571428571, 'recall': 1.0, 'f1': 0.923076923076923, 'number': 12} - Tax Amount: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} - Total Amount: {'precision': 0.7142857142857143, 'recall': 0.9090909090909091, 'f1': 0.8, 'number': 11} - Vendor Name: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 12} - Overall Precision: 0.7857 - Overall Recall: 0.8397 - Overall F1: 0.8118 - Overall Accuracy: 0.9801 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Customer Address | Customer Name | Invoice Number | Tax Amount | Total Amount | Vendor Name | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:-----:|:----:|:---------------:|:----------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------:|:----------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:| | 0.8211 | 6.67 | 20 | 0.3797 | {'precision': 0.25316455696202533, 'recall': 0.24390243902439024, 'f1': 0.24844720496894412, 'number': 82} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | 0.2532 | 0.1527 | 0.1905 | 0.9050 | | 0.3036 | 13.33 | 40 | 0.1941 | {'precision': 0.6448598130841121, 'recall': 0.8414634146341463, 'f1': 0.73015873015873, 'number': 82} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | 
{'precision': 0.75, 'recall': 0.75, 'f1': 0.75, 'number': 12} | 0.6555 | 0.5954 | 0.624 | 0.9493 | | 0.1537 | 20.0 | 60 | 0.1153 | {'precision': 0.7157894736842105, 'recall': 0.8292682926829268, 'f1': 0.768361581920904, 'number': 82} | {'precision': 0.35714285714285715, 'recall': 0.4166666666666667, 'f1': 0.3846153846153846, 'number': 12} | {'precision': 0.8461538461538461, 'recall': 0.9166666666666666, 'f1': 0.8799999999999999, 'number': 12} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | {'precision': 0.8461538461538461, 'recall': 0.9166666666666666, 'f1': 0.8799999999999999, 'number': 12} | 0.7037 | 0.7252 | 0.7143 | 0.9663 | | 0.0862 | 26.67 | 80 | 0.0953 | {'precision': 0.8, 'recall': 0.8292682926829268, 'f1': 0.8143712574850299, 'number': 82} | {'precision': 0.6, 'recall': 0.75, 'f1': 0.6666666666666665, 'number': 12} | {'precision': 0.6666666666666666, 'recall': 1.0, 'f1': 0.8, 'number': 12} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | {'precision': 0.9166666666666666, 'recall': 0.9166666666666666, 'f1': 0.9166666666666666, 'number': 12} | 0.7519 | 0.7634 | 0.7576 | 0.9757 | | 0.0509 | 33.33 | 100 | 0.0846 | {'precision': 0.7857142857142857, 'recall': 0.8048780487804879, 'f1': 0.7951807228915663, 'number': 82} | {'precision': 0.7333333333333333, 'recall': 0.9166666666666666, 'f1': 0.8148148148148148, 'number': 12} | {'precision': 0.8571428571428571, 'recall': 1.0, 'f1': 0.923076923076923, 'number': 12} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 1.0, 'recall': 0.5454545454545454, 'f1': 0.7058823529411764, 'number': 11} | {'precision': 0.8461538461538461, 'recall': 0.9166666666666666, 'f1': 0.8799999999999999, 'number': 12} | 0.8030 | 0.8092 | 0.8061 | 0.9775 | | 0.0354 | 40.0 | 120 | 0.0852 | {'precision': 0.7710843373493976, 'recall': 0.7804878048780488, 'f1': 0.7757575757575758, 'number': 82} | {'precision': 0.6666666666666666, 'recall': 0.8333333333333334, 'f1': 0.7407407407407408, 'number': 12} | {'precision': 0.8, 'recall': 1.0, 'f1': 0.888888888888889, 'number': 12} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.7142857142857143, 'recall': 0.9090909090909091, 'f1': 0.8, 'number': 11} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 12} | 0.7770 | 0.8244 | 0.8 | 0.9797 | | 0.0297 | 46.67 | 140 | 0.0891 | {'precision': 0.7764705882352941, 'recall': 0.8048780487804879, 'f1': 0.7904191616766466, 'number': 82} | {'precision': 0.6666666666666666, 'recall': 0.8333333333333334, 'f1': 0.7407407407407408, 'number': 12} | {'precision': 0.8571428571428571, 'recall': 1.0, 'f1': 0.923076923076923, 'number': 12} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.7142857142857143, 'recall': 0.9090909090909091, 'f1': 0.8, 'number': 11} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 12} | 0.7857 | 0.8397 | 0.8118 | 0.9801 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.2.0+cpu - Datasets 2.12.0 - Tokenizers 0.13.2
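A hedged inference sketch for the fine-tuned checkpoint (the invoice words and 0-1000-normalized bounding boxes below are made-up placeholders; in practice they come from an OCR step, and the tokenizer is taken from the base checkpoint):

```python
# Minimal sketch: token classification with the fine-tuned LayoutLM checkpoint.
# The words and boxes are illustrative placeholders, not real OCR output.
import torch
from transformers import LayoutLMTokenizer, LayoutLMForTokenClassification

tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMForTokenClassification.from_pretrained("Szczotar93/layoutlm_alltags")

words = ["Invoice", "No.", "INV-12345"]
word_boxes = [[60, 40, 150, 60], [160, 40, 200, 60], [210, 40, 330, 60]]  # 0-1000 scale

# Expand word-level boxes to token-level boxes, plus [CLS]/[SEP] boxes.
token_boxes = []
for word, box in zip(words, word_boxes):
    token_boxes.extend([box] * len(tokenizer.tokenize(word)))
token_boxes = [[0, 0, 0, 0]] + token_boxes + [[1000, 1000, 1000, 1000]]

encoding = tokenizer(" ".join(words), return_tensors="pt")
outputs = model(
    input_ids=encoding["input_ids"],
    attention_mask=encoding["attention_mask"],
    token_type_ids=encoding["token_type_ids"],
    bbox=torch.tensor([token_boxes]),
)

predictions = outputs.logits.argmax(-1)[0]
print([model.config.id2label[p.item()] for p in predictions])
```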
pavanpreet-gandhi/babyai-ppo-experiments-2025-03-29_20-17-58
pavanpreet-gandhi
"2025-03-29T21:06:38Z"
0
0
peft
[ "peft", "pytorch", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:adapter:meta-llama/Llama-3.2-3B-Instruct", "region:us" ]
null
"2025-03-29T20:18:02Z"
null