Dataset columns (type and observed min/max):

| Column | Type | Min | Max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 138 |
| author | string (length) | 2 | 42 |
| last_modified | date | 2020-02-15 11:33:14 | 2025-04-13 01:05:21 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (423 classes) | | |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | | |
| createdAt | date | 2022-03-02 23:29:04 | 2025-04-13 01:03:53 |
| card | string (length) | 11 | 1.01M |
H-D-T/Buzz-8b-Large-v0.5
H-D-T
"2024-05-14T05:21:54Z"
12
29
transformers
[ "transformers", "safetensors", "llama", "text-generation", "axolotl", "Alignment-Lab-AI", "Meta-Llama-3", "en", "dataset:H-D-T/Buzz", "arxiv:2403.08763", "arxiv:2310.05914", "arxiv:1903.01435", "arxiv:1712.07628", "arxiv:2306.02707", "base_model:Alignment-Lab-AI/Neural-network-medium-untuned-theta", "base_model:finetune:Alignment-Lab-AI/Neural-network-medium-untuned-theta", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-06T21:51:12Z"
---
base_model: Alignment-Lab-AI/Neural-network-medium-untuned-theta
tags:
- axolotl
- Alignment-Lab-AI
- Meta-Llama-3
model-index:
- name: Buzz-8b-Large-0.5
  results: []
license: apache-2.0
datasets:
- H-D-T/Buzz
language:
- en
---

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6436279eaaef013d1af225c9/fWaQucBWfabfnMsAFN8hv.png)

# Buzz-8b-Large: Advancing Efficiency through Iterative Fine-Tuning

## Introduction

[Alignment Lab AI](https://AlignmentLab.ai) is pleased to introduce its latest research effort: **Buzz-8b-Large**, a state-of-the-art language model developed in collaboration with [Hive Digital Technologies](https://hivedt.com/).

The Buzz model, dataset, and code are being released as a toolkit that aims to demonstrate the potential for reusing and optimizing existing pretrained language models to continually push the performance achievable with an optimal use of FLOPs. Alongside this model, we release:

- [The Buzz Dataset](https://huggingface.co/datasets/H-D-T/Buzz)
- [Buzz-2.5b-Small](https://huggingface.co/tempbuzz/Lab-AI/Buzz-3b-Small-v0.5)
- [Buzz-5b-Medium](https://huggingface.co/tempbuzz/Lab-AI/Buzz-5B-Medium-v0.5)
- [Buzz-8B-Large](https://huggingface.co/tempbuzz/Lab-AI/Buzz-8B-Large-v0.5)

Today we release the **Buzz dataset**; over the next few days, two additional models, **Buzz-3B-Small** and **Buzz-5B-Medium**, will follow. The codebase to refine, filter, and augment the data, as well as to prune and train your own variants, will also be released in the coming days.

## Iterative Fine-Tuning Methodology

Our research builds upon the concepts introduced in several key papers, including:

- [Simple and Scalable Strategies to Continually Pre-train Large Language Models](https://arxiv.org/abs/2403.08763)
- [NEFTune: Noisy Embeddings Improve Instruction Finetuning](https://arxiv.org/abs/2310.05914)
- [An Optimistic Acceleration of AMSGrad for Nonconvex Optimization](https://arxiv.org/abs/1903.01435)
- [Improving Generalization Performance by Switching from Adam to SGD](https://arxiv.org/abs/1712.07628)
- [Orca: Progressive Learning from Complex Explanation Traces of GPT-4](https://arxiv.org/abs/2306.02707v1)

By combining high-quality data with iterative fine-tuning on carefully selected "grounding" distributions from previous epochs, we have developed a cost-effective approach that pushes the boundaries of model reuse and optimization.

## Notably, we observe that the models have not yet appeared to plateau with the application of these techniques

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6436279eaaef013d1af225c9/wyHyDIJnNmbomonZKQAD0.png)

## Chat Template and Inference

To use the Buzz-8b-Large model for chat-based tasks, you can utilize the provided chat template.
Here's an example of how to perform inference using the Hugging Face Transformers library:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model
model_name = "H-D-T/Buzz-8b-Large-v0.5"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Set the device to run the model on (e.g., "cuda" for GPU, "cpu" for CPU)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# Define the input prompt
prompt = "Hello, how are you today?"

# Tokenize the input prompt
input_ids = tokenizer.encode(prompt, return_tensors="pt").to(device)

# Generate the model's response
output = model.generate(
    input_ids,
    max_length=100,
    num_return_sequences=1,
    no_repeat_ngram_size=2,
    early_stopping=True
)

# Decode the generated response
response = tokenizer.decode(output[0], skip_special_tokens=True)

print("Input:", prompt)
print("Response:", response)
```

NOTE: this model is a COMPLETIONS model: by default it generates text that continues the text you send it. It has only a *start* token, <|begin_of_text|>, and a *stop* token, <|end_of_text|>. If you want it to hold conversations reliably, append <|end_of_text|>\n<|begin_of_text|>assistant: to the end of your prompt. (The speaker "assistant" is flexible and can be tailored to the type of response you want; for example, "Mathematician:" will give you a different type of response than "felon:".) Later iterations of the model will likely use formatting similar to *openchat*.

## Conclusion

We intend to focus on *updating* and improving the performance of these models, and the surrounding open-sourced infrastructure. Our next effort will focus on context length, implementing the research currently being conducted by [Wing-Lian](https://github.com/winglian), the lead developer of the [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) training framework that underpins these experiments. We encourage the community to explore Wing-Lian's work, such as the [Llama-3-8b-64k-PoSE](https://huggingface.co/winglian/Llama-3-8b-64k-PoSE) and [llama-3-8b-256k-PoSE](https://huggingface.co/winglian/llama-3-8b-256k-PoSE) models, which showcase the potential for further advancements in language modeling.

Buzz hopes to be a proof of concept, and a toolkit to demonstrate and enable the community in the pursuit of efficient and effective, locally run, personally owned language models. Through collaboration with [Hive Digital Technologies](https://hivedigitaltechnologies.com/), who have enabled us to perform this research, we have demonstrated the immense potential for model reuse and optimization.

## Credits

To the many researchers who have open sourced their knowledge and tools to allow us to pursue this.

To [Hive Digital Technologies](https://hivedigitaltechnologies.com/) for providing compute, advice, and meaningful research insight.

To [Meta](https://llama.meta.com) for developing the Llama models, and maintaining a philosophy of supporting open research and open source.

To wing et al. with the [Open Access AI Collective](https://github.com/OpenAccess-AI-Collective) for developing [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl), assisting with research, and generally being geniuses.

To [Thomas Capelle](https://wandb.ai/capecape) et al. working on [LLM_Surgery](https://wandb.ai/llm_surgery).

As well as many, many others who are too numerous to name.
# Citations ``` @misc{ibrahim2024simple, title={Simple and Scalable Strategies to Continually Pre-train Large Language Models}, author={Adam Ibrahim and Benjamin Thérien and Kshitij Gupta and Mats L. Richter and Quentin Anthony and Timothée Lesort and Eugene Belilovsky and Irina Rish}, year={2024}, eprint={2403.08763}, archivePrefix={arXiv}, primaryClass={cs.LG} } @misc{jain2023neftune, title={NEFTune: Noisy Embeddings Improve Instruction Finetuning}, author={Neel Jain and Ping-yeh Chiang and Yuxin Wen and John Kirchenbauer and Hong-Min Chu and Gowthami Somepalli and Brian R. Bartoldson and Bhavya Kailkhura and Avi Schwarzschild and Aniruddha Saha and Micah Goldblum and Jonas Geiping and Tom Goldstein}, year={2023}, eprint={2310.05914}, archivePrefix={arXiv}, primaryClass={cs.CL} } @misc{wang2020optimistic, title={An Optimistic Acceleration of AMSGrad for Nonconvex Optimization}, author={Jun-Kun Wang and Xiaoyun Li and Belhal Karimi and Ping Li}, year={2020}, eprint={1903.01435}, archivePrefix={arXiv}, primaryClass={stat.ML} } @misc{keskar2017improving, title={Improving Generalization Performance by Switching from Adam to SGD}, author={Nitish Shirish Keskar and Richard Socher}, year={2017}, eprint={1712.07628}, archivePrefix={arXiv}, primaryClass={cs.LG} } @misc{mukherjee2023orca, title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4}, author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah}, year={2023}, eprint={2306.02707}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
PrunaAI/Gustavosta-MagicPrompt-Stable-Diffusion-bnb-8bit-smashed
PrunaAI
"2025-04-04T07:04:51Z"
4
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "pruna-ai", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-04-04T23:06:20Z"
tpo-alignment/Llama-3-8B-TPO-40k
tpo-alignment
"2025-02-19T02:20:15Z"
11
0
null
[ "safetensors", "llama", "arxiv:2405.16681", "region:us" ]
null
"2025-01-23T00:30:37Z"
This is a model released from the preprint: [Triple Preference Optimization: Achieving Better Alignment using a Single Step Optimization](https://arxiv.org/abs/2405.16681). Please refer to our [repository](https://github.com/sahsaeedi/TPO/) for more details.
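A minimal usage sketch, assuming the standard Transformers causal-LM API and the tokenizer's built-in Llama-3 chat template (the TPO paper and repository remain the authoritative reference):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the TPO-aligned checkpoint like any other Llama-3-8B causal LM.
model_name = "tpo-alignment/Llama-3-8B-TPO-40k"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize triple preference optimization in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```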
jjovalle99/deci7b-ft-lora-sql-v2
jjovalle99
"2024-03-06T16:50:26Z"
7
1
transformers
[ "transformers", "safetensors", "deci", "text-generation", "trl", "sft", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us" ]
text-generation
"2024-03-06T16:49:07Z"
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
souging/8ee41ab1-3b12-4059-8fbf-58d9314ee40c
souging
"2025-03-24T00:13:38Z"
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-1.5B", "base_model:adapter:unsloth/Qwen2.5-1.5B", "license:apache-2.0", "region:us" ]
null
"2025-03-23T23:31:57Z"
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-1.5B tags: - axolotl - generated_from_trainer model-index: - name: 8ee41ab1-3b12-4059-8fbf-58d9314ee40c results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-1.5B bf16: auto dataset_prepared_path: null datasets: - data_files: - 609476ccaf890566_train_data.json ds_type: json format: custom path: /root/G.O.D-test/core/data/609476ccaf890566_train_data.json type: field_input: system field_instruction: prompt field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 0 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: souging/8ee41ab1-3b12-4059-8fbf-58d9314ee40c hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 600 micro_batch_size: 5 mlflow_experiment_name: /tmp/609476ccaf890566_train_data.json model_type: AutoModelForCausalLM num_epochs: 4 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 0 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 849d3e07-2377-4625-a636-484db97281c7 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 849d3e07-2377-4625-a636-484db97281c7 warmup_steps: 150 weight_decay: 0.01 xformers_attention: null ``` </details><br> # 8ee41ab1-3b12-4059-8fbf-58d9314ee40c This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B](https://huggingface.co/unsloth/Qwen2.5-1.5B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 5 - eval_batch_size: 5 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 160 - total_eval_batch_size: 40 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 150 - training_steps: 600 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.5150 | 600 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.3
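Note that this repository contains a LoRA adapter, not merged weights. A minimal inference sketch, assuming the standard PEFT API and the base model named in the config above:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model from the axolotl config, then attach this adapter.
base_name = "unsloth/Qwen2.5-1.5B"
adapter_name = "souging/8ee41ab1-3b12-4059-8fbf-58d9314ee40c"

tokenizer = AutoTokenizer.from_pretrained(base_name)
base = AutoModelForCausalLM.from_pretrained(base_name)
model = PeftModel.from_pretrained(base, adapter_name)

inputs = tokenizer("Hello!", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```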
Bhotuya/TextSummarizerAI_Basic_v1
Bhotuya
"2024-04-23T08:11:32Z"
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-04-23T07:36:11Z"
--- license: apache-2.0 base_model: google-t5/t5-small tags: - generated_from_trainer metrics: - rouge model-index: - name: TextSummarizerAI_Basic_v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # TextSummarizerAI_Basic_v1 This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3319 - Rouge1: 0.1985 - Rouge2: 0.1019 - Rougel: 0.1702 - Rougelsum: 0.17 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 62 | 2.7851 | 0.1315 | 0.0434 | 0.1122 | 0.1124 | 19.0 | | No log | 2.0 | 124 | 2.5568 | 0.1442 | 0.0559 | 0.1197 | 0.1193 | 19.0 | | No log | 3.0 | 186 | 2.4669 | 0.1536 | 0.062 | 0.127 | 0.1268 | 19.0 | | No log | 4.0 | 248 | 2.4149 | 0.1768 | 0.0786 | 0.1472 | 0.1472 | 19.0 | | No log | 5.0 | 310 | 2.3847 | 0.1947 | 0.0959 | 0.1653 | 0.1651 | 19.0 | | No log | 6.0 | 372 | 2.3634 | 0.1973 | 0.0999 | 0.1691 | 0.1688 | 19.0 | | No log | 7.0 | 434 | 2.3487 | 0.1981 | 0.1017 | 0.1704 | 0.1703 | 19.0 | | No log | 8.0 | 496 | 2.3404 | 0.1982 | 0.102 | 0.1706 | 0.1703 | 19.0 | | 2.7541 | 9.0 | 558 | 2.3333 | 0.199 | 0.1024 | 0.1711 | 0.1709 | 19.0 | | 2.7541 | 10.0 | 620 | 2.3319 | 0.1985 | 0.1019 | 0.1702 | 0.17 | 19.0 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
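A minimal usage sketch, assuming the standard summarization pipeline (generation length mirrors the ~19-token Gen Len reported above):

```python
from transformers import pipeline

# t5-small fine-tune; the pipeline handles tokenization and decoding.
summarizer = pipeline("summarization", model="Bhotuya/TextSummarizerAI_Basic_v1")

text = (
    "Hugging Face hosts thousands of models for natural language processing, "
    "computer vision, and audio tasks, along with datasets and demos."
)
print(summarizer(text, max_length=20, min_length=5)[0]["summary_text"])
```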
mradermacher/pirate-0.5-GGUF
mradermacher
"2024-12-23T18:16:17Z"
12
0
transformers
[ "transformers", "gguf", "trl", "sft", "generated_from_trainer", "en", "base_model:thangvip/pirate-0.5", "base_model:quantized:thangvip/pirate-0.5", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-12-23T18:12:39Z"
--- base_model: thangvip/pirate-0.5 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - trl - sft - generated_from_trainer --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/thangvip/pirate-0.5 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/pirate-0.5-GGUF/resolve/main/pirate-0.5.Q2_K.gguf) | Q2_K | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/pirate-0.5-GGUF/resolve/main/pirate-0.5.Q3_K_S.gguf) | Q3_K_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/pirate-0.5-GGUF/resolve/main/pirate-0.5.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/pirate-0.5-GGUF/resolve/main/pirate-0.5.Q3_K_L.gguf) | Q3_K_L | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/pirate-0.5-GGUF/resolve/main/pirate-0.5.IQ4_XS.gguf) | IQ4_XS | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/pirate-0.5-GGUF/resolve/main/pirate-0.5.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/pirate-0.5-GGUF/resolve/main/pirate-0.5.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/pirate-0.5-GGUF/resolve/main/pirate-0.5.Q5_K_S.gguf) | Q5_K_S | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/pirate-0.5-GGUF/resolve/main/pirate-0.5.Q5_K_M.gguf) | Q5_K_M | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/pirate-0.5-GGUF/resolve/main/pirate-0.5.Q6_K.gguf) | Q6_K | 0.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/pirate-0.5-GGUF/resolve/main/pirate-0.5.Q8_0.gguf) | Q8_0 | 0.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/pirate-0.5-GGUF/resolve/main/pirate-0.5.f16.gguf) | f16 | 1.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
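If you would rather script inference than use one of the clients covered in TheBloke's READMEs, a minimal sketch with a recent `llama-cpp-python` (the quant choice follows the table above):

```python
from llama_cpp import Llama

# Downloads the chosen quant from this repo, then runs a local completion.
llm = Llama.from_pretrained(
    repo_id="mradermacher/pirate-0.5-GGUF",
    filename="pirate-0.5.Q4_K_M.gguf",  # "fast, recommended" per the table
    n_ctx=2048,
)

out = llm("Ahoy! Tell me about buried treasure.", max_tokens=128)
print(out["choices"][0]["text"])
```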
multimolecule/utrlm-mrl
multimolecule
"2025-02-27T10:37:59Z"
0
0
multimolecule
[ "multimolecule", "pytorch", "safetensors", "utrlm", "Biology", "RNA", "fill-mask", "rna", "dataset:multimolecule/ensembl-genome-browser", "license:agpl-3.0", "region:us" ]
fill-mask
"2025-02-27T10:37:55Z"
--- language: rna tags: - Biology - RNA license: agpl-3.0 datasets: - multimolecule/ensembl-genome-browser library_name: multimolecule pipeline_tag: fill-mask mask_token: "<mask>" widget: - example_title: "HIV-1" text: "GGUC<mask>CUCUGGUUAGACCAGAUCUGAGCCU" output: - label: "*" score: 0.07707168161869049 - label: "<null>" score: 0.07588472962379456 - label: "U" score: 0.07178673148155212 - label: "N" score: 0.06414645165205002 - label: "Y" score: 0.06385370343923569 - example_title: "microRNA-21" text: "UAGC<mask>UAUCAGACUGAUGUUG" output: - label: "*" score: 0.07969731837511063 - label: "<null>" score: 0.07818876206874847 - label: "A" score: 0.07302683591842651 - label: "N" score: 0.06714905053377151 - label: "W" score: 0.0667526125907898 --- # UTR-LM Pre-trained model on 5’ untranslated region (5’UTR) using masked language modeling (MLM), Secondary Structure (SS), and Minimum Free Energy (MFE) objectives. ## Statement _A 5’ UTR Language Model for Decoding Untranslated Regions of mRNA and Function Predictions_ is published in [Nature Machine Intelligence](https://doi.org/10.1038/s42256-024-00823-9), which is a Closed Access / Author-Fee journal. > Machine learning has been at the forefront of the movement for free and open access to research. > > We see no role for closed access or author-fee publication in the future of machine learning research and believe the adoption of these journals as an outlet of record for the machine learning community would be a retrograde step. The MultiMolecule team is committed to the principles of open access and open science. We do NOT endorse the publication of manuscripts in Closed Access / Author-Fee journals and encourage the community to support Open Access journals and conferences. Please consider signing the [Statement on Nature Machine Intelligence](https://openaccess.engineering.oregonstate.edu). ## Disclaimer This is an UNOFFICIAL implementation of the [A 5’ UTR Language Model for Decoding Untranslated Regions of mRNA and Function Predictions](https://doi.org/10.1101/2023.10.11.561938) by Yanyi Chu, Dan Yu, et al. The OFFICIAL repository of UTR-LM is at [a96123155/UTR-LM](https://github.com/a96123155/UTR-LM). > [!CAUTION] > The MultiMolecule team is unable to confirm that the provided model and checkpoints are producing the same intermediate representations as the original implementation. > This is because > > The proposed method is published in a Closed Access / Author-Fee journal. **The team releasing UTR-LM did not write this model card for this model so this model card has been written by the MultiMolecule team.** ## Model Details UTR-LM is a [bert](https://huggingface.co/google-bert/bert-base-uncased)-style model pre-trained on a large corpus of 5’ untranslated regions (5’UTRs) in a self-supervised fashion. This means that the model was trained on the raw nucleotides of RNA sequences only, with an automatic process to generate inputs and labels from those texts. Please refer to the [Training Details](#training-details) section for more information on the training process. ### Variations - **[`multimolecule/utrlm-te_el`](https://huggingface.co/multimolecule/utrlm-te_el)**: The UTR-LM model for Translation Efficiency of transcripts and mRNA Expression Level. - **[`multimolecule/utrlm-mrl`](https://huggingface.co/multimolecule/utrlm-mrl)**: The UTR-LM model for Mean Ribosome Loading. 
### Model Specification <table> <thead> <tr> <th>Variants</th> <th>Num Layers</th> <th>Hidden Size</th> <th>Num Heads</th> <th>Intermediate Size</th> <th>Num Parameters (M)</th> <th>FLOPs (G)</th> <th>MACs (G)</th> <th>Max Num Tokens</th> </tr> </thead> <tbody> <tr> <td>UTR-LM MRL</td> <td rowspan="2">6</td> <td rowspan="2">128</td> <td rowspan="2">16</td> <td rowspan="2">512</td> <td rowspan="2">1.21</td> <td rowspan="2">0.35</td> <td rowspan="2">0.18</td> <td rowspan="2">1022</td> </tr> <tr> <td>UTR-LM TE_EL</td> </tr> </tbody> </table> ### Links - **Code**: [multimolecule.utrlm](https://github.com/DLS5-Omics/multimolecule/tree/master/multimolecule/models/utrlm) - **Data**: - [Ensembl Genome Browser](https://ensembl.org) - [Human 5′ UTR design and variant effect prediction from a massively parallel translation assay](https://doi.org/10.1038/s41587-019-0164-5) - [High-Throughput 5’ UTR Engineering for Enhanced Protein Production in Non-Viral Gene Therapies](https://doi.org/10.1101/2021.10.14.464013) - **Paper**: [A 5’ UTR Language Model for Decoding Untranslated Regions of mRNA and Function Predictions](http://doi.org/10.1038/s41467-021-24436-7) - **Developed by**: Yanyi Chu, Dan Yu, Yupeng Li, Kaixuan Huang, Yue Shen, Le Cong, Jason Zhang, Mengdi Wang - **Model type**: [BERT](https://huggingface.co/google-bert/bert-base-uncased) - [ESM](https://huggingface.co/facebook/esm2_t48_15B_UR50D) - **Original Repository**: [a96123155/UTR-LM](https://github.com/a96123155/UTR-LM) ## Usage The model file depends on the [`multimolecule`](https://multimolecule.danling.org) library. You can install it using pip: ```bash pip install multimolecule ``` ### Direct Use #### Masked Language Modeling You can use this model directly with a pipeline for masked language modeling: ```python >>> import multimolecule # you must import multimolecule to register models >>> from transformers import pipeline >>> unmasker = pipeline("fill-mask", model="multimolecule/utrlm-te_el") >>> unmasker("gguc<mask>cucugguuagaccagaucugagccu") [{'score': 0.07707168161869049, 'token': 23, 'token_str': '*', 'sequence': 'G G U C * C U C U G G U U A G A C C A G A U C U G A G C C U'}, {'score': 0.07588472962379456, 'token': 5, 'token_str': '<null>', 'sequence': 'G G U C C U C U G G U U A G A C C A G A U C U G A G C C U'}, {'score': 0.07178673148155212, 'token': 9, 'token_str': 'U', 'sequence': 'G G U C U C U C U G G U U A G A C C A G A U C U G A G C C U'}, {'score': 0.06414645165205002, 'token': 10, 'token_str': 'N', 'sequence': 'G G U C N C U C U G G U U A G A C C A G A U C U G A G C C U'}, {'score': 0.06385370343923569, 'token': 12, 'token_str': 'Y', 'sequence': 'G G U C Y C U C U G G U U A G A C C A G A U C U G A G C C U'}] ``` #### RNA Secondary Structure Prediction You can use this model to predict the secondary structure of an RNA sequence: ```python >>> import multimolecule # you must import multimolecule to register models >>> from transformers import pipeline >>> predictor = pipeline("rna-secondary-structure", model="multimolecule/utrlm-mrl") >>> predictor("ggucuc") {'sequence': 'G G U C U C', 'secondary_structure': '......', 'contact_map': [[0.4812554121017456, 0.47794032096862793, 0.4789176285266876, 0.48823264241218567, 0.474841445684433, 0.4968946874141693], [0.47794032096862793, 0.49345624446868896, 0.48480257391929626, 0.4933702051639557, 0.4595194160938263, 0.48904451727867126], [0.4789176285266876, 0.48480257391929626, 0.489326536655426, 0.49098923802375793, 0.48537197709083557, 0.4686800539493561], [0.48823264241218567, 
0.4933702051639557, 0.49098923802375793, 0.4644699990749359, 0.49569272994995117, 0.4653873145580292], [0.474841445684433, 0.4595194160938263, 0.48537197709083557, 0.49569272994995117, 0.48744988441467285, 0.4952647387981415], [0.4968946874141693, 0.48904451727867126, 0.4686800539493561, 0.4653873145580292, 0.4952647387981415, 0.4828569293022156]]} ``` ### Downstream Use #### Extract Features Here is how to use this model to get the features of a given sequence in PyTorch: ```python from multimolecule import RnaTokenizer, UtrLmModel tokenizer = RnaTokenizer.from_pretrained("multimolecule/utrlm-te_el") model = UtrLmModel.from_pretrained("multimolecule/utrlm-te_el") text = "UAGCUUAUCAGACUGAUGUUG" input = tokenizer(text, return_tensors="pt") output = model(**input) ``` #### Sequence Classification / Regression > [!NOTE] > This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for sequence classification or regression. Here is how to use this model as backbone to fine-tune for a sequence-level task in PyTorch: ```python import torch from multimolecule import RnaTokenizer, UtrLmForSequencePrediction tokenizer = RnaTokenizer.from_pretrained("multimolecule/utrlm-te_el") model = UtrLmForSequencePrediction.from_pretrained("multimolecule/utrlm-te_el") text = "UAGCUUAUCAGACUGAUGUUG" input = tokenizer(text, return_tensors="pt") label = torch.tensor([1]) output = model(**input, labels=label) ``` #### Token Classification / Regression > [!NOTE] > This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for token classification or regression. Here is how to use this model as backbone to fine-tune for a nucleotide-level task in PyTorch: ```python import torch from multimolecule import RnaTokenizer, UtrLmForTokenPrediction tokenizer = RnaTokenizer.from_pretrained("multimolecule/utrlm-te_el") model = UtrLmForTokenPrediction.from_pretrained("multimolecule/utrlm-te_el") text = "UAGCUUAUCAGACUGAUGUUG" input = tokenizer(text, return_tensors="pt") label = torch.randint(2, (len(text), )) output = model(**input, labels=label) ``` #### Contact Classification / Regression > [!NOTE] > This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for contact classification or regression. Here is how to use this model as backbone to fine-tune for a contact-level task in PyTorch: ```python import torch from multimolecule import RnaTokenizer, UtrLmForContactPrediction tokenizer = RnaTokenizer.from_pretrained("multimolecule/utrlm-te_el") model = UtrLmForContactPrediction.from_pretrained("multimolecule/utrlm-te_el") text = "UAGCUUAUCAGACUGAUGUUG" input = tokenizer(text, return_tensors="pt") label = torch.randint(2, (len(text), len(text))) output = model(**input, labels=label) ``` ## Training Details UTR-LM used a mixed training strategy with one self-supervised task and two supervised tasks, where the labels of both supervised tasks are calculated using [ViennaRNA](https://viennarna.readthedocs.io). 1. **Masked Language Modeling (MLM)**: taking a sequence, the model randomly masks 15% of the tokens in the input then runs the entire masked sentence through the model and has to predict the masked tokens. This is comparable to the Cloze task in language modeling. 2. **Secondary Structure (SS)**: predicting the secondary structure of the `<mask>` token in the MLM task. 3. 
**Minimum Free Energy (MFE)**: predicting the minimum free energy of the 5’ UTR sequence. ### Training Data The UTR-LM model was pre-trained on 5’ UTR sequences from three sources: - **[Ensembl Genome Browser](https://ensembl.org)**: Ensembl is a genome browser for vertebrate genomes that supports research in comparative genomics, evolution, sequence variation and transcriptional regulation. UTR-LM used 5’ UTR sequences from 5 species: human, rat, mouse, chicken, and zebrafish, since these species have high-quality and manual gene annotations. - **[Human 5′ UTR design and variant effect prediction from a massively parallel translation assay](https://doi.org/10.1038/s41587-019-0164-5)**: Sample et al. proposed 8 distinct 5' UTR libraries, each containing random 50 nucleotide sequences, to evaluate translation rules using mean ribosome loading (MRL) measurements. - **[High-Throughput 5’ UTR Engineering for Enhanced Protein Production in Non-Viral Gene Therapies](https://doi.org/10.1038/s41467-021-24436-7)**: Cao et al. analyzed endogenous human 5’ UTRs, including data from 3 distinct cell lines/tissues: human embryonic kidney 293T (HEK), human prostate cancer cell (PC3), and human muscle tissue (Muscle). UTR-LM preprocessed the 5’ UTR sequences in a 4-step pipeline: 1. removed all coding sequence (CDS) and non-5' UTR fragments from the raw sequences. 2. identified and removed duplicate sequences 3. truncated the sequences to fit within a range of 30 to 1022 bp 4. filtered out incorrect and low-quality sequences Note [`RnaTokenizer`][multimolecule.RnaTokenizer] will convert "T"s to "U"s for you, you may disable this behaviour by passing `replace_T_with_U=False`. ### Training Procedure #### Preprocessing UTR-LM used masked language modeling (MLM) as one of the pre-training objectives. The masking procedure is similar to the one used in BERT: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `<mask>`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. #### PreTraining The model was trained on two clusters: 1. 4 NVIDIA V100 GPUs with 16GiB memories. 2. 4 NVIDIA P100 GPUs with 32GiB memories. ## Citation **BibTeX**: ```bibtex @article {chu2023a, author = {Chu, Yanyi and Yu, Dan and Li, Yupeng and Huang, Kaixuan and Shen, Yue and Cong, Le and Zhang, Jason and Wang, Mengdi}, title = {A 5{\textquoteright} UTR Language Model for Decoding Untranslated Regions of mRNA and Function Predictions}, elocation-id = {2023.10.11.561938}, year = {2023}, doi = {10.1101/2023.10.11.561938}, publisher = {Cold Spring Harbor Laboratory}, abstract = {The 5{\textquoteright} UTR, a regulatory region at the beginning of an mRNA molecule, plays a crucial role in regulating the translation process and impacts the protein expression level. Language models have showcased their effectiveness in decoding the functions of protein and genome sequences. Here, we introduced a language model for 5{\textquoteright} UTR, which we refer to as the UTR-LM. The UTR-LM is pre-trained on endogenous 5{\textquoteright} UTRs from multiple species and is further augmented with supervised information including secondary structure and minimum free energy. We fine-tuned the UTR-LM in a variety of downstream tasks. 
The model outperformed the best-known benchmark by up to 42\% for predicting the Mean Ribosome Loading, and by up to 60\% for predicting the Translation Efficiency and the mRNA Expression Level. The model also applies to identifying unannotated Internal Ribosome Entry Sites within the untranslated region and improves the AUPR from 0.37 to 0.52 compared to the best baseline. Further, we designed a library of 211 novel 5{\textquoteright} UTRs with high predicted values of translation efficiency and evaluated them via a wet-lab assay. Experiment results confirmed that our top designs achieved a 32.5\% increase in protein production level relative to well-established 5{\textquoteright} UTR optimized for therapeutics.Competing Interest StatementThe authors have declared no competing interest.}, URL = {https://www.biorxiv.org/content/early/2023/10/14/2023.10.11.561938}, eprint = {https://www.biorxiv.org/content/early/2023/10/14/2023.10.11.561938.full.pdf}, journal = {bioRxiv} } ``` ## Contact Please use GitHub issues of [MultiMolecule](https://github.com/DLS5-Omics/multimolecule/issues) for any questions or comments on the model card. Please contact the authors of the [UTR-LM paper](https://doi.org/10.1101/2023.10.11.561938) for questions or comments on the paper/model. ## License This model is licensed under the [AGPL-3.0 License](https://www.gnu.org/licenses/agpl-3.0.html). ```spdx SPDX-License-Identifier: AGPL-3.0-or-later ```
itachi3939/DeepSeek-R1-Medical-COT-Qwen-1.5B-16bit-GGUF
itachi3939
"2025-02-25T16:19:17Z"
0
0
transformers
[ "transformers", "gguf", "qwen2", "text-generation-inference", "unsloth", "en", "base_model:unsloth/DeepSeek-R1-Distill-Qwen-1.5B-unsloth-bnb-4bit", "base_model:quantized:unsloth/DeepSeek-R1-Distill-Qwen-1.5B-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-02-25T16:17:22Z"
--- base_model: unsloth/DeepSeek-R1-Distill-Qwen-1.5B-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** itachi3939 - **License:** apache-2.0 - **Finetuned from model :** unsloth/DeepSeek-R1-Distill-Qwen-1.5B-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
socks22/q-FrozenLake-v1-4x4-noSlippery
socks22
"2023-12-30T01:21:03Z"
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2023-12-30T01:21:02Z"
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gymnasium as gym  # the Deep RL course environments use Gymnasium

# load_from_hub is the helper from the Hugging Face Deep RL course notebook
# (a minimal definition is sketched below).
model = load_from_hub(repo_id="socks22/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
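For completeness, a minimal sketch of the `load_from_hub` helper used above, assuming the pickle stores the Q-table dictionary as in the Deep RL course notebook:

```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled model file from the Hub and unpickle it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```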
mradermacher/Suprema-32B-v1-i1-GGUF
mradermacher
"2025-03-09T04:05:37Z"
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:suayptalha/Suprema-32B-v1", "base_model:quantized:suayptalha/Suprema-32B-v1", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
"2025-03-09T00:46:03Z"
--- base_model: suayptalha/Suprema-32B-v1 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/suayptalha/Suprema-32B-v1 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Suprema-32B-v1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Suprema-32B-v1-i1-GGUF/resolve/main/Suprema-32B-v1.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Suprema-32B-v1-i1-GGUF/resolve/main/Suprema-32B-v1.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Suprema-32B-v1-i1-GGUF/resolve/main/Suprema-32B-v1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/Suprema-32B-v1-i1-GGUF/resolve/main/Suprema-32B-v1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | | | [GGUF](https://huggingface.co/mradermacher/Suprema-32B-v1-i1-GGUF/resolve/main/Suprema-32B-v1.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | | | [GGUF](https://huggingface.co/mradermacher/Suprema-32B-v1-i1-GGUF/resolve/main/Suprema-32B-v1.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | | | [GGUF](https://huggingface.co/mradermacher/Suprema-32B-v1-i1-GGUF/resolve/main/Suprema-32B-v1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Suprema-32B-v1-i1-GGUF/resolve/main/Suprema-32B-v1.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Suprema-32B-v1-i1-GGUF/resolve/main/Suprema-32B-v1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Suprema-32B-v1-i1-GGUF/resolve/main/Suprema-32B-v1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | | | [GGUF](https://huggingface.co/mradermacher/Suprema-32B-v1-i1-GGUF/resolve/main/Suprema-32B-v1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Suprema-32B-v1-i1-GGUF/resolve/main/Suprema-32B-v1.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Suprema-32B-v1-i1-GGUF/resolve/main/Suprema-32B-v1.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | | | [GGUF](https://huggingface.co/mradermacher/Suprema-32B-v1-i1-GGUF/resolve/main/Suprema-32B-v1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Suprema-32B-v1-i1-GGUF/resolve/main/Suprema-32B-v1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Suprema-32B-v1-i1-GGUF/resolve/main/Suprema-32B-v1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | | | [GGUF](https://huggingface.co/mradermacher/Suprema-32B-v1-i1-GGUF/resolve/main/Suprema-32B-v1.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Suprema-32B-v1-i1-GGUF/resolve/main/Suprema-32B-v1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality | | 
[GGUF](https://huggingface.co/mradermacher/Suprema-32B-v1-i1-GGUF/resolve/main/Suprema-32B-v1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 19.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Suprema-32B-v1-i1-GGUF/resolve/main/Suprema-32B-v1.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | | | [GGUF](https://huggingface.co/mradermacher/Suprema-32B-v1-i1-GGUF/resolve/main/Suprema-32B-v1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | | | [GGUF](https://huggingface.co/mradermacher/Suprema-32B-v1-i1-GGUF/resolve/main/Suprema-32B-v1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | | | [GGUF](https://huggingface.co/mradermacher/Suprema-32B-v1-i1-GGUF/resolve/main/Suprema-32B-v1.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
LeKyks1/poca-SoccerTwos
LeKyks1
"2023-11-21T20:58:57Z"
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
"2023-11-21T20:58:08Z"
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---

# **poca** Agent playing **SoccerTwos**

This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:

- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: LeKyks1/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
YongjieNiu/prior_dora-xl-cat-100-1
YongjieNiu
"2024-06-08T08:40:34Z"
1
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "license:openrail++", "region:us" ]
text-to-image
"2024-06-08T08:20:44Z"
---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: SDXL_model
instance_prompt: a photo of rcn cat
widget:
- text: a photo of rcn cat by the sea
  output:
    url: image_0.png
- text: a photo of rcn cat by the sea
  output:
    url: image_1.png
- text: a photo of rcn cat by the sea
  output:
    url: image_2.png
- text: a photo of rcn cat by the sea
  output:
    url: image_3.png
---

<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# SDXL LoRA DreamBooth - YongjieNiu/prior_dora-xl-cat-100-1

<Gallery />

## Model description

These are YongjieNiu/prior_dora-xl-cat-100-1 LoRA adaptation weights for SDXL_model. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: VAE.

## Trigger words

You should use `a photo of rcn cat` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format. [Download](YongjieNiu/prior_dora-xl-cat-100-1/tree/main) them in the Files & versions tab.

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

A minimal example is sketched at the end of this card.

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
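Until the snippet above is filled in, here is a minimal loading sketch; the base checkpoint is an assumption (the card's `SDXL_model` placeholder does not name it), so substitute the SDXL checkpoint actually used for training:

```python
import torch
from diffusers import AutoPipelineForText2Image

# Assumption: the public SDXL base stands in for the card's "SDXL_model" placeholder.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("YongjieNiu/prior_dora-xl-cat-100-1")

image = pipe("a photo of rcn cat by the sea").images[0]
image.save("rcn_cat.png")
```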
MaziyarPanahi/Ninja-V3-nsfw-enhance-GGUF
MaziyarPanahi
"2024-12-06T13:39:46Z"
81
1
null
[ "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "base_model:swdq/Ninja-V3-nsfw-enhance", "base_model:quantized:swdq/Ninja-V3-nsfw-enhance", "region:us" ]
text-generation
"2024-12-06T13:19:14Z"
---
base_model: swdq/Ninja-V3-nsfw-enhance
inference: false
model_creator: swdq
model_name: Ninja-V3-nsfw-enhance-GGUF
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
---

# [MaziyarPanahi/Ninja-V3-nsfw-enhance-GGUF](https://huggingface.co/MaziyarPanahi/Ninja-V3-nsfw-enhance-GGUF)

- Model creator: [swdq](https://huggingface.co/swdq)
- Original model: [swdq/Ninja-V3-nsfw-enhance](https://huggingface.co/swdq/Ninja-V3-nsfw-enhance)

## Description

[MaziyarPanahi/Ninja-V3-nsfw-enhance-GGUF](https://huggingface.co/MaziyarPanahi/Ninja-V3-nsfw-enhance-GGUF) contains GGUF format model files for [swdq/Ninja-V3-nsfw-enhance](https://huggingface.co/swdq/Ninja-V3-nsfw-enhance).

### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source locally running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.

## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
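To fetch a quant programmatically instead of through one of the clients above, a minimal `huggingface_hub` sketch (the filename is illustrative; check the repo's Files tab for the exact quant names):

```python
from huggingface_hub import hf_hub_download

# Filename is illustrative; pick an actual quant from the repository file list.
path = hf_hub_download(
    repo_id="MaziyarPanahi/Ninja-V3-nsfw-enhance-GGUF",
    filename="Ninja-V3-nsfw-enhance.Q4_K_M.gguf",
)
print("Downloaded to:", path)
```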
Jonjew/UmaThurman1990s
Jonjew
"2025-03-08T19:51:31Z"
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:unknown", "region:us" ]
text-to-image
"2025-03-08T19:51:24Z"
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
    <lora:Uma_Thurman_1990s_Flux_r1:1.2> beautiful detailed photograph, blonde
    hair cascading over her shoulders makeup, wearing a boatneck dress,
    standing in a cafe, smiling
  output:
    url: images/00031-1334423513.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: unknown
---

# Uma Thurman 1990s

<Gallery />

## Model description

From https://civitai.com/models/957770/uma-thurman-1990s-flux

Recommended LoRA strength: 1

## Download model

Weights for this model are available in Safetensors format. [Download](/Jonjew/UmaThurman1990s/tree/main) them in the Files & versions tab.
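## Example usage (sketch)

A minimal, unofficial sketch of applying this LoRA to FLUX.1-dev with 🧨 diffusers; the sampler settings below are illustrative, not taken from the original card.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Load the LoRA from this repository (strength ~1, per the card).
pipe.load_lora_weights("Jonjew/UmaThurman1990s")

image = pipe(
    "beautiful detailed photograph, blonde hair cascading over her shoulders, "
    "wearing a boatneck dress, standing in a cafe, smiling",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("uma_thurman_1990s.png")
```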
havinash-ai/c5feaada-eb55-42bc-9402-2d6bf3824df4
havinash-ai
"2025-02-03T14:47:11Z"
8
0
peft
[ "peft", "safetensors", "gemma2", "axolotl", "generated_from_trainer", "base_model:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2", "base_model:adapter:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2", "license:gemma", "region:us" ]
null
"2025-02-03T14:35:52Z"
--- library_name: peft license: gemma base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2 tags: - axolotl - generated_from_trainer model-index: - name: c5feaada-eb55-42bc-9402-2d6bf3824df4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2 bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - bb477885411926f5_train_data.json ds_type: json format: custom path: /workspace/input_data/bb477885411926f5_train_data.json type: field_input: comment field_instruction: prompt field_output: chosen format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: havinash-ai/c5feaada-eb55-42bc-9402-2d6bf3824df4 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/bb477885411926f5_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 34a20bc2-fcbe-44a0-988b-720bf4465c7f wandb_project: Birthday-SN56-9-Gradients-On-Demand wandb_run: your_name wandb_runid: 34a20bc2-fcbe-44a0-988b-720bf4465c7f warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # c5feaada-eb55-42bc-9402-2d6bf3824df4 This model is a fine-tuned version of [UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2](https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.2901 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0007 | 1 | 0.6165 | | 0.2918 | 0.0357 | 50 | 0.3071 | | 0.2869 | 0.0714 | 100 | 0.2972 | | 0.2623 | 0.1070 | 150 | 0.2917 | | 0.3361 | 0.1427 | 200 | 0.2901 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
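## Example: loading the adapter (sketch)

A minimal sketch, not part of the auto-generated card, showing how this LoRA adapter could be attached to its base model with PEFT:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2"
adapter_id = "havinash-ai/c5feaada-eb55-42bc-9402-2d6bf3824df4"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```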
FoodIntake/flan-t5-large-portion-to-qu-task
FoodIntake
"2024-11-05T20:22:10Z"
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "autotrain", "dataset:autotrain-sornm-46488/autotrain-data", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-02-02T16:17:37Z"
---
tags:
- autotrain
- text2text-generation
widget:
- text: "I love AutoTrain"
datasets:
- autotrain-sornm-46488/autotrain-data
---

Converts qualitative food portion descriptors (approximate measurements) into gram estimates.

| input | output |
|-----------------|-----------------|
| 3 lime wedges | 28.0 g |
| 1 large clove of garlic | 15.0 g |
| large apple | 236.59 g |

# Model Trained Using AutoTrain

- Problem type: Seq2Seq

## Validation Metrics

- loss: 0.4644700586795807
- rouge1: 75.8017
- rouge2: 63.419
- rougeL: 75.7333
- rougeLsum: 75.7431
- gen_len: 5.5457
- runtime: 191.5265
- samples_per_second: 53.403
- steps_per_second: 3.342
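## Usage (sketch)

A minimal sketch, assuming the standard seq2seq interface that AutoTrain FLAN-T5 models expose; the expected input phrasing follows the table above.

```python
from transformers import pipeline

portion_to_grams = pipeline(
    "text2text-generation", model="FoodIntake/flan-t5-large-portion-to-qu-task"
)
# Expected to return a gram estimate such as "28.0 g" for this input.
print(portion_to_grams("3 lime wedges")[0]["generated_text"])
```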
the-most-beautiful-girl-in-the-world/Wacth-the-most-beautiful-girl-in-the-world-full-indo-sub-movie-netflix-hd
the-most-beautiful-girl-in-the-world
"2025-02-16T18:23:20Z"
0
0
null
[ "region:us" ]
null
"2025-02-16T18:22:09Z"
<a href="https://poltulive.site/en/movie/1393069/the-most-beautiful?hug"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a> <a href="https://poltulive.site/en/movie/1393069/the-most-beautiful?hug">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a> </br> <a href="https://poltulive.site/en/movie/1393069/the-most-beautiful?hug">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤)</a> </br>
Euanyu/GERBERA-RNA
Euanyu
"2024-05-21T21:29:58Z"
34
2
transformers
[ "transformers", "pytorch", "roberta", "license:mit", "endpoints_compatible", "region:us" ]
null
"2024-05-21T21:26:34Z"
---
license: mit
---

The GERBERA BioNER model for identifying RNA entities, trained on the JNLPBA-RNA dataset together with the auxiliary GUM-Time dataset.
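A minimal usage sketch (not from the original card), assuming the checkpoint works with the standard token-classification pipeline:

```python
from transformers import pipeline

# Assumption: the repo ships a token-classification head and label map.
ner = pipeline(
    "token-classification", model="Euanyu/GERBERA-RNA", aggregation_strategy="simple"
)
print(ner("The 5S rRNA gene is transcribed by RNA polymerase III."))
```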
StarDawn-VirtualSinger/Ningxi-For-DiffSinger-OpenUtau-Package
StarDawn-VirtualSinger
"2024-04-26T13:07:39Z"
0
0
null
[ "Diffsinger", "OpenUtau", "zh", "dataset:Q78KG/Ningxi-for-Diffsinger-dataset", "license:cc-by-nc-nd-4.0", "region:us" ]
null
"2023-08-19T12:49:34Z"
--- datasets: - Q78KG/Ningxi-for-Diffsinger-dataset language: - zh tags: - Diffsinger - OpenUtau license: cc-by-nc-nd-4.0 ---
VERSIL91/2a14ef0d-0d5d-4f8d-972e-0eb544927040
VERSIL91
"2025-01-11T15:55:10Z"
17
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/Yarn-Solar-10b-32k", "base_model:adapter:NousResearch/Yarn-Solar-10b-32k", "license:apache-2.0", "region:us" ]
null
"2025-01-11T15:41:37Z"
--- library_name: peft license: apache-2.0 base_model: NousResearch/Yarn-Solar-10b-32k tags: - axolotl - generated_from_trainer model-index: - name: 2a14ef0d-0d5d-4f8d-972e-0eb544927040 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml accelerate_config: dynamo_backend: inductor mixed_precision: bf16 num_machines: 1 num_processes: auto use_cpu: false adapter: lora base_model: NousResearch/Yarn-Solar-10b-32k bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 54e48cc11e123e56_train_data.json ds_type: json format: custom path: /workspace/input_data/54e48cc11e123e56_train_data.json type: field_instruction: prompt field_output: completion format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 16 gradient_checkpointing: true group_by_length: false hub_model_id: VERSIL91/2a14ef0d-0d5d-4f8d-972e-0eb544927040 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lora_target_modules: - q_proj - v_proj lr_scheduler: cosine max_memory: 0: 70GiB max_steps: 20 micro_batch_size: 2 mlflow_experiment_name: /tmp/54e48cc11e123e56_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true quantization_config: llm_int8_enable_fp32_cpu_offload: true load_in_8bit: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer torch_compile: true train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 2a14ef0d-0d5d-4f8d-972e-0eb544927040 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 2a14ef0d-0d5d-4f8d-972e-0eb544927040 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 2a14ef0d-0d5d-4f8d-972e-0eb544927040 This model is a fine-tuned version of [NousResearch/Yarn-Solar-10b-32k](https://huggingface.co/NousResearch/Yarn-Solar-10b-32k) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.0070 | 1 | nan | | 0.0 | 0.0351 | 5 | nan | | 0.0 | 0.0702 | 10 | nan | | 0.0 | 0.1053 | 15 | nan | | 0.0 | 0.1404 | 20 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
fats-fme/27c99032-e411-4a5e-a46e-ac68b83279fb
fats-fme
"2025-03-05T22:31:31Z"
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:openlm-research/open_llama_3b", "base_model:adapter:openlm-research/open_llama_3b", "license:apache-2.0", "region:us" ]
null
"2025-03-05T21:51:40Z"
--- library_name: peft license: apache-2.0 base_model: openlm-research/open_llama_3b tags: - axolotl - generated_from_trainer model-index: - name: 27c99032-e411-4a5e-a46e-ac68b83279fb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: openlm-research/open_llama_3b bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 78ae7b77ded523da_train_data.json ds_type: json format: custom path: /workspace/input_data/78ae7b77ded523da_train_data.json type: field_instruction: s3_key field_output: default_caption format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto early_stopping_patience: 3 eval_max_new_tokens: 128 eval_steps: 100 eval_table_size: null evals_per_epoch: null flash_attention: false fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: false hub_model_id: fats-fme/27c99032-e411-4a5e-a46e-ac68b83279fb hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 256 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 128 lora_target_linear: true lr_scheduler: cosine max_memory: 0: 70GB max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/78ae7b77ded523da_train_data.json model_type: AutoModelForCausalLM num_epochs: 10 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 100 saves_per_epoch: null sequence_len: 2048 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: acd865fe-1dca-40b3-9ae1-8a2d395c2c47 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: acd865fe-1dca-40b3-9ae1-8a2d395c2c47 warmup_steps: 100 weight_decay: 0.05 xformers_attention: null ``` </details><br> # 27c99032-e411-4a5e-a46e-ac68b83279fb This model is a fine-tuned version of [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.0063 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0003 | 1 | 0.6862 | | 0.0162 | 0.0259 | 100 | 0.0119 | | 0.0044 | 0.0517 | 200 | 0.0063 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Alwaly/parler-tts-wolof-mini-v1
Alwaly
"2024-10-27T18:48:26Z"
49
0
transformers
[ "transformers", "safetensors", "parler_tts", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-10-24T12:19:04Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bawin/qwen-2_5-14B-lora-3ep-chinese-translation_v2
bawin
"2025-03-27T06:26:57Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Qwen2.5-14B-Instruct", "base_model:finetune:unsloth/Qwen2.5-14B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-03-27T06:16:46Z"
--- base_model: unsloth/Qwen2.5-14B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** bawin - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-14B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
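A minimal inference sketch (not part of the original card), assuming the merged checkpoint loads as a standard Qwen2 chat model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bawin/qwen-2_5-14B-lora-3ep-chinese-translation_v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Translate to English: 今天天气很好。"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0],
                       skip_special_tokens=True))
```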
jlbaker361/ddpogan_512_wikiart_2_0_0
jlbaker361
"2024-10-29T18:17:25Z"
30
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2024-10-29T18:15:42Z"
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers pipeline that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
xiao12346/t5-large_PREFIX_TUNING_SEQ2SEQ_c2
xiao12346
"2023-07-31T07:12:25Z"
2
0
peft
[ "peft", "region:us" ]
null
"2023-07-31T07:11:47Z"
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0
MayBashendy/ArabicNewSplits7_usingALLEssays_FineTuningAraBERT_run2_AugV5_k18_task1_organization
MayBashendy
"2025-01-15T19:39:12Z"
183
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:aubmindlab/bert-base-arabertv02", "base_model:finetune:aubmindlab/bert-base-arabertv02", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-12-31T19:10:23Z"
--- library_name: transformers base_model: aubmindlab/bert-base-arabertv02 tags: - generated_from_trainer model-index: - name: ArabicNewSplits7_usingALLEssays_FineTuningAraBERT_run2_AugV5_k18_task1_organization results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ArabicNewSplits7_usingALLEssays_FineTuningAraBERT_run2_AugV5_k18_task1_organization This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7839 - Qwk: 0.7211 - Mse: 0.7839 - Rmse: 0.8854 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:| | No log | 0.0233 | 2 | 6.9214 | 0.0057 | 6.9214 | 2.6309 | | No log | 0.0465 | 4 | 4.7085 | 0.0576 | 4.7085 | 2.1699 | | No log | 0.0698 | 6 | 4.4413 | -0.0773 | 4.4413 | 2.1074 | | No log | 0.0930 | 8 | 3.7931 | -0.0305 | 3.7931 | 1.9476 | | No log | 0.1163 | 10 | 2.5133 | 0.1277 | 2.5133 | 1.5853 | | No log | 0.1395 | 12 | 2.1914 | 0.1583 | 2.1914 | 1.4803 | | No log | 0.1628 | 14 | 2.3688 | 0.0986 | 2.3688 | 1.5391 | | No log | 0.1860 | 16 | 2.2377 | 0.1460 | 2.2377 | 1.4959 | | No log | 0.2093 | 18 | 2.3143 | 0.1127 | 2.3143 | 1.5213 | | No log | 0.2326 | 20 | 2.3296 | 0.0694 | 2.3296 | 1.5263 | | No log | 0.2558 | 22 | 2.1241 | 0.2353 | 2.1241 | 1.4574 | | No log | 0.2791 | 24 | 1.8677 | 0.3252 | 1.8677 | 1.3666 | | No log | 0.3023 | 26 | 1.8738 | 0.2951 | 1.8738 | 1.3689 | | No log | 0.3256 | 28 | 1.9105 | 0.2656 | 1.9105 | 1.3822 | | No log | 0.3488 | 30 | 2.3289 | 0.1948 | 2.3289 | 1.5261 | | No log | 0.3721 | 32 | 3.0485 | 0.0973 | 3.0485 | 1.7460 | | No log | 0.3953 | 34 | 2.6915 | 0.1657 | 2.6915 | 1.6406 | | No log | 0.4186 | 36 | 2.1572 | 0.2778 | 2.1572 | 1.4687 | | No log | 0.4419 | 38 | 1.9552 | 0.2958 | 1.9552 | 1.3983 | | No log | 0.4651 | 40 | 1.6829 | 0.3556 | 1.6829 | 1.2973 | | No log | 0.4884 | 42 | 1.7847 | 0.3688 | 1.7847 | 1.3359 | | No log | 0.5116 | 44 | 2.5935 | 0.2637 | 2.5935 | 1.6104 | | No log | 0.5349 | 46 | 3.0697 | 0.2136 | 3.0697 | 1.7521 | | No log | 0.5581 | 48 | 2.6044 | 0.2784 | 2.6044 | 1.6138 | | No log | 0.5814 | 50 | 1.7888 | 0.4027 | 1.7888 | 1.3374 | | No log | 0.6047 | 52 | 1.4786 | 0.4234 | 1.4786 | 1.2160 | | No log | 0.6279 | 54 | 1.3988 | 0.4118 | 1.3988 | 1.1827 | | No log | 0.6512 | 56 | 1.5591 | 0.4342 | 1.5591 | 1.2486 | | No log | 0.6744 | 58 | 1.8523 | 0.4444 | 1.8523 | 1.3610 | | No log | 0.6977 | 60 | 2.2845 | 0.3518 | 2.2845 | 1.5115 | | No log | 0.7209 | 62 | 2.3085 | 0.3518 | 2.3085 | 1.5194 | | No log | 0.7442 | 64 | 1.7468 | 0.4390 | 1.7468 | 1.3217 | | No log | 0.7674 | 66 | 1.5372 | 0.4371 | 1.5372 | 1.2398 | | No log | 0.7907 | 68 | 1.3682 | 0.4672 | 1.3682 | 1.1697 | | No log | 0.8140 | 70 | 1.2000 | 0.4394 | 1.2000 | 1.0954 | | No log | 0.8372 | 72 | 1.2000 | 0.4928 
| 1.2000 | 1.0955 | | No log | 0.8605 | 74 | 1.1761 | 0.4928 | 1.1761 | 1.0845 | | No log | 0.8837 | 76 | 1.2252 | 0.5235 | 1.2252 | 1.1069 | | No log | 0.9070 | 78 | 1.1528 | 0.5306 | 1.1528 | 1.0737 | | No log | 0.9302 | 80 | 1.2461 | 0.5588 | 1.2461 | 1.1163 | | No log | 0.9535 | 82 | 1.4374 | 0.4526 | 1.4374 | 1.1989 | | No log | 0.9767 | 84 | 1.8045 | 0.3824 | 1.8045 | 1.3433 | | No log | 1.0 | 86 | 2.0864 | 0.3043 | 2.0864 | 1.4444 | | No log | 1.0233 | 88 | 1.6521 | 0.4276 | 1.6521 | 1.2854 | | No log | 1.0465 | 90 | 1.4138 | 0.4762 | 1.4138 | 1.1890 | | No log | 1.0698 | 92 | 1.3945 | 0.4698 | 1.3945 | 1.1809 | | No log | 1.0930 | 94 | 1.4050 | 0.5059 | 1.4050 | 1.1853 | | No log | 1.1163 | 96 | 1.2043 | 0.5503 | 1.2043 | 1.0974 | | No log | 1.1395 | 98 | 1.0665 | 0.6447 | 1.0665 | 1.0327 | | No log | 1.1628 | 100 | 0.9289 | 0.6853 | 0.9289 | 0.9638 | | No log | 1.1860 | 102 | 0.8667 | 0.6714 | 0.8667 | 0.9310 | | No log | 1.2093 | 104 | 0.9596 | 0.6131 | 0.9596 | 0.9796 | | No log | 1.2326 | 106 | 1.0143 | 0.6043 | 1.0143 | 1.0071 | | No log | 1.2558 | 108 | 1.0076 | 0.5811 | 1.0076 | 1.0038 | | No log | 1.2791 | 110 | 1.1308 | 0.6437 | 1.1308 | 1.0634 | | No log | 1.3023 | 112 | 1.1854 | 0.6437 | 1.1854 | 1.0888 | | No log | 1.3256 | 114 | 1.0026 | 0.6581 | 1.0026 | 1.0013 | | No log | 1.3488 | 116 | 1.2347 | 0.4962 | 1.2347 | 1.1112 | | No log | 1.3721 | 118 | 1.2319 | 0.4580 | 1.2319 | 1.1099 | | No log | 1.3953 | 120 | 0.9769 | 0.6809 | 0.9769 | 0.9884 | | No log | 1.4186 | 122 | 1.0473 | 0.5594 | 1.0473 | 1.0234 | | No log | 1.4419 | 124 | 1.5104 | 0.5422 | 1.5104 | 1.2290 | | No log | 1.4651 | 126 | 1.6539 | 0.5652 | 1.6539 | 1.2861 | | No log | 1.4884 | 128 | 1.7611 | 0.5178 | 1.7611 | 1.3271 | | No log | 1.5116 | 130 | 1.3373 | 0.5889 | 1.3373 | 1.1564 | | No log | 1.5349 | 132 | 0.9954 | 0.675 | 0.9954 | 0.9977 | | No log | 1.5581 | 134 | 0.9561 | 0.6577 | 0.9561 | 0.9778 | | No log | 1.5814 | 136 | 0.8548 | 0.6800 | 0.8548 | 0.9245 | | No log | 1.6047 | 138 | 0.7988 | 0.7347 | 0.7988 | 0.8937 | | No log | 1.6279 | 140 | 0.8063 | 0.7260 | 0.8063 | 0.8980 | | No log | 1.6512 | 142 | 0.8574 | 0.7050 | 0.8574 | 0.9260 | | No log | 1.6744 | 144 | 0.8667 | 0.6471 | 0.8667 | 0.9310 | | No log | 1.6977 | 146 | 0.9518 | 0.6043 | 0.9518 | 0.9756 | | No log | 1.7209 | 148 | 1.2604 | 0.5890 | 1.2604 | 1.1227 | | No log | 1.7442 | 150 | 1.1710 | 0.6220 | 1.1710 | 1.0821 | | No log | 1.7674 | 152 | 0.8806 | 0.6944 | 0.8806 | 0.9384 | | No log | 1.7907 | 154 | 0.9354 | 0.6383 | 0.9354 | 0.9672 | | No log | 1.8140 | 156 | 0.9341 | 0.6383 | 0.9341 | 0.9665 | | No log | 1.8372 | 158 | 0.8217 | 0.7483 | 0.8217 | 0.9065 | | No log | 1.8605 | 160 | 0.8235 | 0.7152 | 0.8235 | 0.9075 | | No log | 1.8837 | 162 | 0.8404 | 0.7067 | 0.8404 | 0.9168 | | No log | 1.9070 | 164 | 1.0135 | 0.6867 | 1.0135 | 1.0067 | | No log | 1.9302 | 166 | 0.9277 | 0.7044 | 0.9277 | 0.9632 | | No log | 1.9535 | 168 | 0.7775 | 0.7483 | 0.7775 | 0.8818 | | No log | 1.9767 | 170 | 0.8253 | 0.7123 | 0.8253 | 0.9085 | | No log | 2.0 | 172 | 0.8583 | 0.7172 | 0.8583 | 0.9265 | | No log | 2.0233 | 174 | 0.9866 | 0.6294 | 0.9866 | 0.9933 | | No log | 2.0465 | 176 | 0.9988 | 0.6294 | 0.9988 | 0.9994 | | No log | 2.0698 | 178 | 0.9030 | 0.6331 | 0.9030 | 0.9503 | | No log | 2.0930 | 180 | 1.1081 | 0.5512 | 1.1081 | 1.0526 | | No log | 2.1163 | 182 | 1.1775 | 0.5079 | 1.1775 | 1.0851 | | No log | 2.1395 | 184 | 1.0642 | 0.5693 | 1.0642 | 1.0316 | | No log | 2.1628 | 186 | 0.9339 | 0.6286 | 0.9339 | 0.9664 | | No log | 2.1860 | 
188 | 0.7740 | 0.76 | 0.7740 | 0.8798 | | No log | 2.2093 | 190 | 0.7640 | 0.76 | 0.7640 | 0.8741 | | No log | 2.2326 | 192 | 0.7728 | 0.7397 | 0.7728 | 0.8791 | | No log | 2.2558 | 194 | 0.7235 | 0.7733 | 0.7235 | 0.8506 | | No log | 2.2791 | 196 | 0.8083 | 0.7329 | 0.8083 | 0.8990 | | No log | 2.3023 | 198 | 1.0623 | 0.6424 | 1.0623 | 1.0307 | | No log | 2.3256 | 200 | 0.9732 | 0.6497 | 0.9732 | 0.9865 | | No log | 2.3488 | 202 | 0.8250 | 0.7355 | 0.8250 | 0.9083 | | No log | 2.3721 | 204 | 0.7252 | 0.7448 | 0.7252 | 0.8516 | | No log | 2.3953 | 206 | 0.7307 | 0.7448 | 0.7307 | 0.8548 | | No log | 2.4186 | 208 | 0.7369 | 0.7448 | 0.7369 | 0.8584 | | No log | 2.4419 | 210 | 0.8438 | 0.7226 | 0.8438 | 0.9186 | | No log | 2.4651 | 212 | 0.8246 | 0.7226 | 0.8246 | 0.9081 | | No log | 2.4884 | 214 | 0.7947 | 0.7436 | 0.7947 | 0.8914 | | No log | 2.5116 | 216 | 0.7182 | 0.7550 | 0.7182 | 0.8475 | | No log | 2.5349 | 218 | 0.7190 | 0.75 | 0.7190 | 0.8480 | | No log | 2.5581 | 220 | 0.7286 | 0.7534 | 0.7286 | 0.8536 | | No log | 2.5814 | 222 | 0.7319 | 0.7853 | 0.7319 | 0.8555 | | No log | 2.6047 | 224 | 0.9092 | 0.7232 | 0.9092 | 0.9535 | | No log | 2.6279 | 226 | 0.9222 | 0.6971 | 0.9222 | 0.9603 | | No log | 2.6512 | 228 | 0.8633 | 0.7160 | 0.8633 | 0.9291 | | No log | 2.6744 | 230 | 0.8549 | 0.6434 | 0.8549 | 0.9246 | | No log | 2.6977 | 232 | 0.8175 | 0.6986 | 0.8175 | 0.9042 | | No log | 2.7209 | 234 | 0.8120 | 0.6759 | 0.8120 | 0.9011 | | No log | 2.7442 | 236 | 0.8325 | 0.6759 | 0.8325 | 0.9124 | | No log | 2.7674 | 238 | 0.7906 | 0.6761 | 0.7906 | 0.8892 | | No log | 2.7907 | 240 | 0.8439 | 0.6176 | 0.8439 | 0.9186 | | No log | 2.8140 | 242 | 0.7759 | 0.7465 | 0.7759 | 0.8808 | | No log | 2.8372 | 244 | 0.7580 | 0.7310 | 0.7580 | 0.8706 | | No log | 2.8605 | 246 | 0.7574 | 0.7619 | 0.7574 | 0.8703 | | No log | 2.8837 | 248 | 0.7660 | 0.7259 | 0.7660 | 0.8752 | | No log | 2.9070 | 250 | 0.7403 | 0.7376 | 0.7403 | 0.8604 | | No log | 2.9302 | 252 | 0.7087 | 0.75 | 0.7087 | 0.8418 | | No log | 2.9535 | 254 | 0.7871 | 0.7950 | 0.7871 | 0.8872 | | No log | 2.9767 | 256 | 1.0804 | 0.6215 | 1.0804 | 1.0394 | | No log | 3.0 | 258 | 1.0441 | 0.6556 | 1.0441 | 1.0218 | | No log | 3.0233 | 260 | 0.7890 | 0.7586 | 0.7890 | 0.8883 | | No log | 3.0465 | 262 | 0.6987 | 0.8101 | 0.6987 | 0.8359 | | No log | 3.0698 | 264 | 0.7408 | 0.7260 | 0.7408 | 0.8607 | | No log | 3.0930 | 266 | 0.7429 | 0.7260 | 0.7429 | 0.8619 | | No log | 3.1163 | 268 | 0.7966 | 0.7211 | 0.7966 | 0.8925 | | No log | 3.1395 | 270 | 0.8970 | 0.6759 | 0.8970 | 0.9471 | | No log | 3.1628 | 272 | 0.8621 | 0.7083 | 0.8621 | 0.9285 | | No log | 3.1860 | 274 | 0.7669 | 0.7432 | 0.7669 | 0.8758 | | No log | 3.2093 | 276 | 0.7525 | 0.7483 | 0.7525 | 0.8675 | | No log | 3.2326 | 278 | 0.7208 | 0.7651 | 0.7208 | 0.8490 | | No log | 3.2558 | 280 | 0.6996 | 0.7763 | 0.6996 | 0.8364 | | No log | 3.2791 | 282 | 0.7091 | 0.7778 | 0.7091 | 0.8421 | | No log | 3.3023 | 284 | 0.6790 | 0.7898 | 0.6790 | 0.8240 | | No log | 3.3256 | 286 | 0.7430 | 0.7172 | 0.7430 | 0.8620 | | No log | 3.3488 | 288 | 0.9431 | 0.6176 | 0.9431 | 0.9711 | | No log | 3.3721 | 290 | 0.9969 | 0.6061 | 0.9969 | 0.9984 | | No log | 3.3953 | 292 | 0.8710 | 0.6324 | 0.8710 | 0.9333 | | No log | 3.4186 | 294 | 0.8159 | 0.6619 | 0.8159 | 0.9032 | | No log | 3.4419 | 296 | 0.8156 | 0.6714 | 0.8156 | 0.9031 | | No log | 3.4651 | 298 | 0.8070 | 0.6619 | 0.8070 | 0.8984 | | No log | 3.4884 | 300 | 0.7838 | 0.7172 | 0.7838 | 0.8853 | | No log | 3.5116 | 302 | 0.7649 | 0.7483 | 0.7649 
| 0.8746 | | No log | 3.5349 | 304 | 0.8219 | 0.7117 | 0.8219 | 0.9066 | | No log | 3.5581 | 306 | 0.8618 | 0.7209 | 0.8618 | 0.9283 | | No log | 3.5814 | 308 | 0.7978 | 0.7586 | 0.7978 | 0.8932 | | No log | 3.6047 | 310 | 0.6648 | 0.7742 | 0.6648 | 0.8154 | | No log | 3.6279 | 312 | 0.6343 | 0.7703 | 0.6343 | 0.7964 | | No log | 3.6512 | 314 | 0.6481 | 0.7483 | 0.6481 | 0.8050 | | No log | 3.6744 | 316 | 0.6741 | 0.7483 | 0.6741 | 0.8210 | | No log | 3.6977 | 318 | 0.7242 | 0.7432 | 0.7242 | 0.8510 | | No log | 3.7209 | 320 | 0.8138 | 0.6232 | 0.8138 | 0.9021 | | No log | 3.7442 | 322 | 0.8209 | 0.7034 | 0.8209 | 0.9061 | | No log | 3.7674 | 324 | 0.8519 | 0.6986 | 0.8519 | 0.9230 | | No log | 3.7907 | 326 | 0.8428 | 0.6667 | 0.8428 | 0.9181 | | No log | 3.8140 | 328 | 0.9061 | 0.625 | 0.9061 | 0.9519 | | No log | 3.8372 | 330 | 0.8089 | 0.7075 | 0.8089 | 0.8994 | | No log | 3.8605 | 332 | 0.7229 | 0.7333 | 0.7229 | 0.8502 | | No log | 3.8837 | 334 | 0.6612 | 0.7397 | 0.6612 | 0.8131 | | No log | 3.9070 | 336 | 0.7223 | 0.7550 | 0.7223 | 0.8499 | | No log | 3.9302 | 338 | 0.8394 | 0.6892 | 0.8394 | 0.9162 | | No log | 3.9535 | 340 | 0.7153 | 0.7662 | 0.7153 | 0.8458 | | No log | 3.9767 | 342 | 0.6350 | 0.7448 | 0.6350 | 0.7969 | | No log | 4.0 | 344 | 0.6705 | 0.7376 | 0.6705 | 0.8188 | | No log | 4.0233 | 346 | 0.7278 | 0.7050 | 0.7278 | 0.8531 | | No log | 4.0465 | 348 | 0.7803 | 0.7101 | 0.7803 | 0.8833 | | No log | 4.0698 | 350 | 0.7929 | 0.7042 | 0.7929 | 0.8905 | | No log | 4.0930 | 352 | 0.7954 | 0.6950 | 0.7954 | 0.8918 | | No log | 4.1163 | 354 | 0.8614 | 0.6712 | 0.8614 | 0.9281 | | No log | 4.1395 | 356 | 0.8199 | 0.7114 | 0.8199 | 0.9055 | | No log | 4.1628 | 358 | 0.7479 | 0.7310 | 0.7479 | 0.8648 | | No log | 4.1860 | 360 | 0.7482 | 0.7361 | 0.7482 | 0.8650 | | No log | 4.2093 | 362 | 0.7884 | 0.7297 | 0.7884 | 0.8879 | | No log | 4.2326 | 364 | 0.8991 | 0.6803 | 0.8991 | 0.9482 | | No log | 4.2558 | 366 | 0.9318 | 0.6712 | 0.9318 | 0.9653 | | No log | 4.2791 | 368 | 0.8379 | 0.6897 | 0.8379 | 0.9154 | | No log | 4.3023 | 370 | 0.8511 | 0.6331 | 0.8511 | 0.9226 | | No log | 4.3256 | 372 | 0.9142 | 0.6232 | 0.9142 | 0.9562 | | No log | 4.3488 | 374 | 0.8086 | 0.6950 | 0.8086 | 0.8992 | | No log | 4.3721 | 376 | 0.7122 | 0.6809 | 0.7122 | 0.8439 | | No log | 4.3953 | 378 | 0.7045 | 0.6950 | 0.7045 | 0.8394 | | No log | 4.4186 | 380 | 0.7171 | 0.7183 | 0.7171 | 0.8468 | | No log | 4.4419 | 382 | 0.7327 | 0.7092 | 0.7327 | 0.8560 | | No log | 4.4651 | 384 | 0.7629 | 0.6906 | 0.7629 | 0.8734 | | No log | 4.4884 | 386 | 0.7902 | 0.6906 | 0.7902 | 0.8889 | | No log | 4.5116 | 388 | 0.8103 | 0.6906 | 0.8103 | 0.9002 | | No log | 4.5349 | 390 | 0.7645 | 0.6857 | 0.7645 | 0.8744 | | No log | 4.5581 | 392 | 0.7527 | 0.7682 | 0.7527 | 0.8676 | | No log | 4.5814 | 394 | 0.8013 | 0.7317 | 0.8013 | 0.8952 | | No log | 4.6047 | 396 | 0.7767 | 0.7613 | 0.7767 | 0.8813 | | No log | 4.6279 | 398 | 0.7491 | 0.7632 | 0.7491 | 0.8655 | | No log | 4.6512 | 400 | 0.7558 | 0.7467 | 0.7558 | 0.8694 | | No log | 4.6744 | 402 | 0.7361 | 0.7682 | 0.7361 | 0.8580 | | No log | 4.6977 | 404 | 0.7515 | 0.75 | 0.7515 | 0.8669 | | No log | 4.7209 | 406 | 0.8150 | 0.75 | 0.8150 | 0.9028 | | No log | 4.7442 | 408 | 0.8325 | 0.7294 | 0.8325 | 0.9124 | | No log | 4.7674 | 410 | 0.7761 | 0.7665 | 0.7761 | 0.8810 | | No log | 4.7907 | 412 | 0.8545 | 0.6842 | 0.8545 | 0.9244 | | No log | 4.8140 | 414 | 1.0428 | 0.6014 | 1.0428 | 1.0212 | | No log | 4.8372 | 416 | 1.1222 | 0.5507 | 1.1222 | 1.0593 | | No log | 4.8605 
| 418 | 1.1090 | 0.5255 | 1.1090 | 1.0531 | | No log | 4.8837 | 420 | 0.8080 | 0.7133 | 0.8080 | 0.8989 | | No log | 4.9070 | 422 | 0.6121 | 0.75 | 0.6121 | 0.7824 | | No log | 4.9302 | 424 | 0.5943 | 0.7867 | 0.5943 | 0.7709 | | No log | 4.9535 | 426 | 0.6047 | 0.8 | 0.6047 | 0.7776 | | No log | 4.9767 | 428 | 0.6452 | 0.7919 | 0.6452 | 0.8033 | | No log | 5.0 | 430 | 0.6828 | 0.7755 | 0.6828 | 0.8263 | | No log | 5.0233 | 432 | 0.6657 | 0.7619 | 0.6657 | 0.8159 | | No log | 5.0465 | 434 | 0.6791 | 0.7534 | 0.6791 | 0.8241 | | No log | 5.0698 | 436 | 0.6974 | 0.7755 | 0.6974 | 0.8351 | | No log | 5.0930 | 438 | 0.6802 | 0.8052 | 0.6802 | 0.8248 | | No log | 5.1163 | 440 | 0.7529 | 0.7636 | 0.7529 | 0.8677 | | No log | 5.1395 | 442 | 0.7329 | 0.7831 | 0.7329 | 0.8561 | | No log | 5.1628 | 444 | 0.6842 | 0.8280 | 0.6842 | 0.8272 | | No log | 5.1860 | 446 | 0.7148 | 0.7875 | 0.7148 | 0.8455 | | No log | 5.2093 | 448 | 0.6530 | 0.8026 | 0.6530 | 0.8081 | | No log | 5.2326 | 450 | 0.6484 | 0.7919 | 0.6484 | 0.8052 | | No log | 5.2558 | 452 | 0.7111 | 0.7755 | 0.7111 | 0.8433 | | No log | 5.2791 | 454 | 0.7534 | 0.7651 | 0.7534 | 0.8680 | | No log | 5.3023 | 456 | 0.6590 | 0.7838 | 0.6590 | 0.8118 | | No log | 5.3256 | 458 | 0.6042 | 0.75 | 0.6042 | 0.7773 | | No log | 5.3488 | 460 | 0.6055 | 0.7413 | 0.6055 | 0.7782 | | No log | 5.3721 | 462 | 0.6236 | 0.7619 | 0.6236 | 0.7897 | | No log | 5.3953 | 464 | 0.6839 | 0.8 | 0.6839 | 0.8270 | | No log | 5.4186 | 466 | 0.7462 | 0.7083 | 0.7462 | 0.8638 | | No log | 5.4419 | 468 | 0.8124 | 0.6993 | 0.8124 | 0.9014 | | No log | 5.4651 | 470 | 0.8938 | 0.6176 | 0.8938 | 0.9454 | | No log | 5.4884 | 472 | 0.9300 | 0.6479 | 0.9300 | 0.9644 | | No log | 5.5116 | 474 | 0.8998 | 0.6479 | 0.8998 | 0.9486 | | No log | 5.5349 | 476 | 0.8624 | 0.7034 | 0.8624 | 0.9286 | | No log | 5.5581 | 478 | 0.8390 | 0.7034 | 0.8390 | 0.9160 | | No log | 5.5814 | 480 | 0.8662 | 0.6792 | 0.8662 | 0.9307 | | No log | 5.6047 | 482 | 0.9149 | 0.6296 | 0.9149 | 0.9565 | | No log | 5.6279 | 484 | 0.8757 | 0.6951 | 0.8757 | 0.9358 | | No log | 5.6512 | 486 | 0.7774 | 0.7425 | 0.7774 | 0.8817 | | No log | 5.6744 | 488 | 0.6936 | 0.7815 | 0.6936 | 0.8328 | | No log | 5.6977 | 490 | 0.7119 | 0.7950 | 0.7119 | 0.8437 | | No log | 5.7209 | 492 | 0.8170 | 0.6957 | 0.8170 | 0.9039 | | No log | 5.7442 | 494 | 0.8747 | 0.6452 | 0.8747 | 0.9352 | | No log | 5.7674 | 496 | 0.8677 | 0.6154 | 0.8677 | 0.9315 | | No log | 5.7907 | 498 | 0.8029 | 0.6901 | 0.8029 | 0.8961 | | 0.4007 | 5.8140 | 500 | 0.7108 | 0.7397 | 0.7108 | 0.8431 | | 0.4007 | 5.8372 | 502 | 0.7398 | 0.7172 | 0.7398 | 0.8601 | | 0.4007 | 5.8605 | 504 | 0.8247 | 0.7034 | 0.8247 | 0.9081 | | 0.4007 | 5.8837 | 506 | 0.8840 | 0.6528 | 0.8840 | 0.9402 | | 0.4007 | 5.9070 | 508 | 0.8619 | 0.6528 | 0.8619 | 0.9284 | | 0.4007 | 5.9302 | 510 | 0.7839 | 0.7211 | 0.7839 | 0.8854 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu118 - Datasets 2.21.0 - Tokenizers 0.19.1
Govind-Vurjana/en_pipeline
Govind-Vurjana
"2024-02-26T20:32:10Z"
0
0
spacy
[ "spacy", "token-classification", "en", "model-index", "region:us" ]
token-classification
"2024-02-26T20:12:42Z"
--- tags: - spacy - token-classification language: - en model-index: - name: en_pipeline results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.9851430668 - name: NER Recall type: recall value: 0.9871347179 - name: NER F Score type: f_score value: 0.9861378867 --- | Feature | Description | | --- | --- | | **Name** | `en_pipeline` | | **Version** | `0.0.0` | | **spaCy** | `>=3.7.2,<3.8.0` | | **Default Pipeline** | `tok2vec`, `ner` | | **Components** | `tok2vec`, `ner` | | **Vectors** | 514157 keys, 514157 unique vectors (300 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (16 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `AGE GROUP`, `BENCHMARKS`, `CADAVERS/BIOMECHANICAL/NONCLINICAL`, `DEVICE`, `FOLLOW UP`, `INDICATION`, `LEVEL OF EVIDENCE `, `MANAGEMENT`, `PERFORMANCE OUTCOMES`, `RISK FACTORS`, `SAFETY OUTCOMES`, `SCORES`, `SEX`, `STATISTICAL SIGNIFICANCE`, `STRYKER`, `STUDY TYPE` | </details> ### Accuracy | Type | Score | | --- | --- | | `ENTS_F` | 98.61 | | `ENTS_P` | 98.51 | | `ENTS_R` | 98.71 | | `TOK2VEC_LOSS` | 44342.33 | | `NER_LOSS` | 204832.98 |
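### Usage

A minimal sketch for trying the pipeline locally. The wheel URL follows spaCy's packaging convention for Hub-hosted pipelines and is an assumption, not a link taken from the original card.

```python
# pip install "en_pipeline @ https://huggingface.co/Govind-Vurjana/en_pipeline/resolve/main/en_pipeline-any-py3-none-any.whl"
import spacy

nlp = spacy.load("en_pipeline")
doc = nlp("A prospective cohort study of 120 patients with 24-month follow up.")
print([(ent.text, ent.label_) for ent in doc.ents])
```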
gautamgc17/llama3.2-vlm-torchtune
gautamgc17
"2024-11-29T21:15:48Z"
29
0
transformers
[ "transformers", "safetensors", "mllama", "image-text-to-text", "facebook", "meta", "pytorch", "llama", "llama-3", "conversational", "en", "de", "fr", "it", "pt", "hi", "es", "th", "arxiv:2204.05149", "license:llama3.2", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
"2024-11-29T21:04:32Z"
--- language: - en - de - fr - it - pt - hi - es - th library_name: transformers pipeline_tag: image-text-to-text tags: - facebook - meta - pytorch - llama - llama-3 widget: - example_title: rococo art messages: - role: user content: - type: text text: Which era does this piece belong to? Give details about the era. - type: image_url image_url: url: https://huggingface.co/datasets/huggingface/release-assets/resolve/main/rococo.jpg - role: assistant content: | The piece is a painting of a woman in a red dress, surrounded by gold and white ornate details. The woman is depicted in mid-air, with her arms outstretched and her legs bent at the knees. She is holding a bouquet of flowers in her right hand and a fruit in her left hand. The background of the painting is a light blue sky with pink clouds, and there are also some pink flowers and green leaves surrounding the woman. The overall atmosphere of the painting is one of joy and celebration, as if the woman is dancing or celebrating something. This piece belongs to the Rococo era, which was a style of art and architecture that emerged in Europe in the 18th century. The Rococo style is characterized by its use of pastel colors, curved lines, and ornate details. It was popularized during the reign of King Louis XV of France, who ruled from 1715 to 1774. - example_title: diagram understanding messages: - role: user content: - type: text text: Where do the droughts happen according to this diagram? - type: image_url image_url: url: https://huggingface.co/datasets/huggingface/release-assets/resolve/main/weather_events.png - role: assistant content: | # Drought Regions in 2016 According to the diagram, the regions that experienced severe droughts in 2016 are: - **Eastern and Southern Africa**: This region was severely affected by droughts, with significant impacts on agriculture and water resources. - **Western and Southern Africa**: Droughts also occurred in this region, leading to food insecurity and other challenges. These regions were particularly vulnerable to droughts in 2016, highlighting the need for effective water management strategies and climate resilience measures. - example_title: document qa messages: - role: user content: - type: text text: How long does it take from invoice date to due date? Be short and concise. - type: image_url image_url: url: https://huggingface.co/datasets/huggingface/release-assets/resolve/main/invoice.png - role: assistant content: | To determine the time gap between the invoice date and the due date, we need to calculate the difference between these two dates. Invoice Date: February 11, 2019 Due Date: February 26, 2019 Step 1: Calculate the Difference in Days February 26 - February 11 = 15 days Therefore, the time gap between the invoice date and the due date is 15 days. license: llama3.2 extra_gated_prompt: >- ### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT Llama 3.2 Version Release Date: September 25, 2024 “Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. “Documentation” means the specifications, manuals and documentation accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview. 
“Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. “Llama 3.2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://www.llama.com/llama-downloads. “Llama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion thereof) made available under this Agreement. “Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.” iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference into this Agreement. 2. Additional Commercial Terms. 
If, on the Llama 3.2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. 
Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.

7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.

### Llama 3.2 Acceptable Use Policy

Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).

#### Prohibited Uses

We want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to:

1. Violate the law or others’ rights, including to:
   1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
      1. Violence or terrorism
      2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
      3. Human trafficking, exploitation, and sexual violence
      4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials
      5. Sexual solicitation
      6. Any other criminal activity
   2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
   3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
   4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
   5. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law
   6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
   7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
   8. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Meta
2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following:
   1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997
   2. Guns and illegal weapons (including weapon development)
   3. Illegal drugs and regulated/controlled substances
   4. Operation of critical infrastructure, transportation technologies, or heavy machinery
   5. Self-harm or harm to others, including suicide, cutting, and eating disorders
   6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following:
   1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
   2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
   3. Generating, promoting, or further distributing spam
   4. Impersonating another individual without consent, authorization, or legal right
   5. Representing that the use of Llama 3.2 or outputs are human-generated
   6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2

With respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models.
Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:

* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)
* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: [email protected]

extra_gated_fields:
  First Name: text
  Last Name: text
  Date of birth: date_picker
  Country: country
  Affiliation: text
  Job title:
    type: select
    options:
      - Student
      - Research Graduate
      - AI researcher
      - AI developer/engineer
      - Reporter
      - Other
  geo: ip_location
  By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: >-
  The information you provide will be collected, stored, processed and shared in
  accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
extra_gated_eu_disallowed: true
---

## Model Information

The Llama 3.2-Vision collection of multimodal large language models (LLMs) is a collection of pretrained and instruction-tuned image reasoning generative models in 11B and 90B sizes (text + images in / text out). The Llama 3.2-Vision instruction-tuned models are optimized for visual recognition, image reasoning, captioning, and answering general questions about an image. The models outperform many of the available open source and closed multimodal models on common industry benchmarks.

**Model Developer:** Meta

**Model Architecture:** Llama 3.2-Vision is built on top of the Llama 3.1 text-only model, which is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. To support image recognition tasks, the Llama 3.2-Vision model uses a separately trained vision adapter that integrates with the pre-trained Llama 3.1 language model. The adapter consists of a series of cross-attention layers that feed image encoder representations into the core LLM.

| | Training Data | Params | Input modalities | Output modalities | Context length | GQA | Data volume | Knowledge cutoff |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Llama 3.2-Vision | (Image, text) pairs | 11B (10.6) | Text + Image | Text | 128k | Yes | 6B (image, text) pairs | December 2023 |
| Llama 3.2-Vision | (Image, text) pairs | 90B (88.8) | Text + Image | Text | 128k | Yes | 6B (image, text) pairs | December 2023 |

**Supported Languages:** For text-only tasks, English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Note that for image+text applications, English is the only language supported.
Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.

**Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.

**Model Release Date:** Sept 25, 2024

**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.

**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).

**Feedback:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama-models/tree/main/models/llama3_2). For more technical information about generation parameters and recipes for how to use Llama 3.2-Vision in applications, please see [here](https://github.com/meta-llama/llama-recipes).

## Intended Use

**Intended Use Cases:** Llama 3.2-Vision is intended for commercial and research use. Instruction tuned models are intended for visual recognition, image reasoning, captioning, and assistant-like chat with images, whereas pretrained models can be adapted for a variety of image reasoning tasks. Because Llama 3.2-Vision can take both images and text as inputs, additional use cases could include:

1. Visual Question Answering (VQA) and Visual Reasoning: Imagine a machine that looks at a picture and understands your questions about it.
2. Document Visual Question Answering (DocVQA): Imagine a computer understanding both the text and layout of a document, like a map or contract, and then answering questions about it directly from the image.
3. Image Captioning: Image captioning bridges the gap between vision and language, extracting details, understanding the scene, and then crafting a sentence or two that tells the story.
4. Image-Text Retrieval: Image-text retrieval is like a matchmaker for images and their descriptions, similar to a search engine that understands both pictures and words.
5. Visual Grounding: Visual grounding is like connecting the dots between what we see and say. It’s about understanding how language references specific parts of an image, allowing AI models to pinpoint objects or regions based on natural language descriptions.

The Llama 3.2 model collection also supports leveraging model outputs to improve other models, including through synthetic data generation and distillation. The Llama 3.2 Community License allows for these use cases.

**Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card.

## How to use

This repository contains two versions of Llama-3.2-11B-Vision-Instruct, for use with transformers and with the original `llama` codebase.
### Use with transformers

Starting with transformers >= 4.45.0, you can run inference using conversational messages that include an image to query about. Make sure to update your transformers installation via `pip install --upgrade transformers`.

```python
import requests
import torch
from PIL import Image
from transformers import MllamaForConditionalGeneration, AutoProcessor

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"

model = MllamaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg"
image = Image.open(requests.get(url, stream=True).raw)

messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "If I had to write a haiku for this one, it would be: "}
    ]}
]
input_text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(
    image,
    input_text,
    add_special_tokens=False,
    return_tensors="pt"
).to(model.device)

output = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(output[0]))
```

### Use with `llama`

Please follow the instructions in the [repository](https://github.com/meta-llama/llama).

To download the original checkpoints, you can use `huggingface-cli` as follows:

```
huggingface-cli download meta-llama/Llama-3.2-11B-Vision-Instruct --include "original/*" --local-dir Llama-3.2-11B-Vision-Instruct
```

## Hardware and Software

**Training Factors:** We used custom training libraries, Meta's custom-built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure.

**Training Energy Use:** Training utilized a cumulative total of **2.02M** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model, and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.

**Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **584** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq.

| | Training Time (GPU hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
| :---- | :---: | :---: | :---: | :---: |
| Llama 3.2-vision 11B | Stage 1 pretraining: 147K H100 hours Stage 2 annealing: 98K H100 hours SFT: 896 H100 hours RLHF: 224 H100 hours | 700 | 71 | 0 |
| Llama 3.2-vision 90B | Stage 1 pretraining: 885K H100 hours Stage 2 annealing: 885K H100 hours SFT: 3072 H100 hours RLHF: 2048 H100 hours | 700 | 513 | 0 |
| Total | 2.02M | | 584 | 0 |

The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.

## Training Data

**Overview:** Llama 3.2-Vision was pretrained on 6B image and text pairs.
The instruction tuning data includes publicly available vision instruction datasets, as well as over 3M synthetically generated examples.

**Data Freshness:** The pretraining data has a cutoff of December 2023.

## Benchmarks - Image Reasoning

In this section, we report the results for Llama 3.2-Vision models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library.

### Base Pretrained Models

| Category | Benchmark | # Shots | Metric | Llama 3.2 11B | Llama 3.2 90B |
| ----- | ----- | ----- | ----- | ----- | ----- |
| Image Understanding | VQAv2 (val) | 0 | Accuracy | 66.8 | 73.6 |
| | Text VQA (val) | 0 | Relaxed accuracy | 73.1 | 73.5 |
| | DocVQA (val, unseen) | 0 | ANLS | 62.3 | 70.7 |
| Visual Reasoning | MMMU (val, 0-shot) | 0 | Micro average accuracy | 41.7 | 49.3 |
| | ChartQA (test) | 0 | Accuracy | 39.4 | 54.2 |
| | InfographicsQA (val, unseen) | 0 | ANLS | 43.2 | 56.8 |
| | AI2 Diagram (test) | 0 | Accuracy | 62.4 | 75.3 |

### Instruction Tuned Models

| Modality | Capability | Benchmark | # Shots | Metric | Llama 3.2 11B | Llama 3.2 90B |
| ----- | :---: | ----- | :---: | :---: | ----- | ----- |
| Image | College-level Problems and Mathematical Reasoning | MMMU (val, CoT) | 0 | Micro average accuracy | 50.7 | 60.3 |
| | | MMMU-Pro, Standard (10 opts, test) | 0 | Accuracy | 33.0 | 45.2 |
| | | MMMU-Pro, Vision (test) | 0 | Accuracy | 23.7 | 33.8 |
| | | MathVista (testmini) | 0 | Accuracy | 51.5 | 57.3 |
| | Charts and Diagram Understanding | ChartQA (test, CoT) | 0 | Relaxed accuracy | 83.4 | 85.5 |
| | | AI2 Diagram (test) | 0 | Accuracy | 91.1 | 92.3 |
| | | DocVQA (test) | 0 | ANLS | 88.4 | 90.1 |
| | General Visual Question Answering | VQAv2 (test) | 0 | Accuracy | 75.2 | 78.1 |
| Text | General | MMLU (CoT) | 0 | Macro_avg/acc | 73.0 | 86.0 |
| | Math | MATH (CoT) | 0 | Final_em | 51.9 | 68.0 |
| | Reasoning | GPQA | 0 | Accuracy | 32.8 | 46.7 |
| | Multilingual | MGSM (CoT) | 0 | em | 68.9 | 86.9 |

## Responsibility & Safety

As part of our responsible release approach, we followed a three-pronged strategy for managing trust & safety risks:

1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama.
2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm.
3. Provide protections for the community to help prevent the misuse of our models.

### Responsible Deployment

**Approach:** Llama is a foundational technology designed to be used in a variety of use cases; examples of how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology's power, by aligning our model safety for the generic use cases addressing a standard set of harms. Developers are then in the driver's seat to tailor safety for their use case, defining their own policy and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/); you can refer to it to learn more.
#### Llama 3.2 Instruct

**Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, and to offer developers a readily available, safe, and powerful model for various applications, reducing the workload required to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/).

**Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.

**Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.

#### Llama 3.2 Systems

**Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieving the right helpfulness-safety alignment, as well as to mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementation](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box.

### New Capabilities and Use Cases

**Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see the [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well.

**Image Reasoning:** Llama 3.2-Vision models come with multimodal (text and image) input capabilities enabling image reasoning applications. As part of our responsible release process, we took dedicated measures including evaluations and mitigations to address the risk of the models uniquely identifying individuals in images. As with other LLM risks, models may not always be robust to adversarial prompts, and developers should evaluate identification and other applicable risks in the context of their applications, as well as consider deploying Llama Guard 3-11B-Vision as part of their system or other mitigations as appropriate to detect and mitigate such risks.

### Evaluations

**Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompts and output responses.
It is important to evaluate applications in context, and we recommend building a dedicated evaluation dataset for your use case.

**Red teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting, and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harms to society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity, in addition to multilingual content specialists with backgrounds in integrity issues in specific geographic markets.

### Critical Risks

In addition to our safety work above, we took extra care in measuring and/or mitigating the following critical risk areas:

**1. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** For Llama 3.1, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons. For Llama 3.2-Vision models, we conducted additional targeted evaluations and found that it was unlikely Llama 3.2 presented an increase in scientific capabilities due to its added image understanding capability as compared to Llama 3.1.

**2. Child Safety:** Child Safety risk assessments were conducted using a team of experts to assess the model’s capability to produce outputs that could result in Child Safety risks and to inform any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors, including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market-specific nuances or experiences.

**3. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s vision capabilities are not generally germane to cyber uplift, we believe that the testing conducted for Llama 3.1 also applies to Llama 3.2.

### Community

**Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress.
We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open-sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).

**Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).

**Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and a [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.

## Ethical Considerations and Limitations

**Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

**Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
bruhzair/Magni-t1-123b
bruhzair
"2025-02-23T02:24:35Z"
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-02-23T01:24:13Z"
--- base_model: [] library_name: transformers tags: - mergekit - merge --- # Magni This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using /workspace/cache/models--anthracite-org--magnum-v4-123b/snapshots/68fdd395bf5282429aa11d3b2737add1944243b3 as a base. ### Models Merged The following models were included in the merge: * /workspace/Mutant-t6 * /workspace/cache/models--bruhzair--Behemoth-Magnum-v4-SLERP-123b/snapshots/605e4c7b0ea17bd8afb6fcddefbd40235329c2b7 ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: /workspace/cache/models--anthracite-org--magnum-v4-123b/snapshots/68fdd395bf5282429aa11d3b2737add1944243b3 dtype: float16 merge_method: model_stock slices: - sources: - layer_range: [0, 88] model: /workspace/Mutant-t6 - layer_range: [0, 88] model: /workspace/cache/models--bruhzair--Behemoth-Magnum-v4-SLERP-123b/snapshots/605e4c7b0ea17bd8afb6fcddefbd40235329c2b7 - layer_range: [0, 88] model: /workspace/cache/models--anthracite-org--magnum-v4-123b/snapshots/68fdd395bf5282429aa11d3b2737add1944243b3 ```
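As a usage note: a merge defined this way can be reproduced with mergekit's standard `mergekit-yaml` entry point. The sketch below assumes mergekit is installed, the YAML above is saved as `config.yaml`, and the three source model paths exist locally; the output directory name is illustrative.

```
pip install mergekit
mergekit-yaml config.yaml ./Magni-t1-123b --cuda
```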
jjaegii/Llama-3.1-8B-LoRA-kolon-sg-v2-merged
jjaegii
"2024-10-15T09:08:13Z"
30
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-10-15T09:03:21Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ksgk-fy/genius_merge_v1
Ksgk-fy
"2024-06-20T15:15:16Z"
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-20T15:12:15Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
speaches-ai/piper-en_US-bryce-medium
speaches-ai
"2025-03-20T04:48:09Z"
0
0
null
[ "onnx", "speaches", "piper", "text-to-speech", "en", "region:us" ]
text-to-speech
"2025-03-20T04:48:06Z"
--- language: en pipeline_tag: text-to-speech tags: - speaches - piper library: onnx --- Run this model using [speaches](https://github.com/speaches-ai/speaches)
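As a hedged sketch of what that can look like: speaches exposes an OpenAI-compatible HTTP API, so once a server is running with this model available, a text-to-speech request can be made over plain HTTP. The host, port, output filename, and the `voice` value below are assumptions for illustration, not documented defaults; check the speaches documentation for your setup.

```python
# Minimal sketch: POST to a locally running speaches server's
# OpenAI-compatible speech endpoint. The URL, port, and "voice"
# value are assumptions -- adjust them to match your deployment.
import requests

resp = requests.post(
    "http://localhost:8000/v1/audio/speech",
    json={
        "model": "speaches-ai/piper-en_US-bryce-medium",
        "voice": "en_US-bryce-medium",  # assumed voice identifier
        "input": "Hello from the Bryce voice.",
    },
)
resp.raise_for_status()
with open("speech.wav", "wb") as f:
    f.write(resp.content)
```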
JOSESMOKE/tear_17
JOSESMOKE
"2025-02-22T18:24:01Z"
0
0
null
[ "onnx", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
"2025-02-22T17:56:03Z"
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
charlieoneill/falcon-abstracts
charlieoneill
"2023-07-17T06:29:06Z"
0
0
null
[ "tensorboard", "generated_from_trainer", "license:apache-2.0", "region:us" ]
null
"2023-07-17T00:55:24Z"
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: falcon-abstracts results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # falcon-abstracts This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - training_steps: 2500 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
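For readers who want to see the hyperparameters above expressed in code, here is a minimal sketch of the corresponding 🤗 `TrainingArguments`. The output directory name is illustrative, and the model, dataset, and any LoRA/PEFT wiring (not documented in this card) are omitted.

```python
# Sketch of TrainingArguments mirroring the hyperparameters listed above;
# model and dataset setup are intentionally omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="falcon-abstracts",   # illustrative name
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,   # yields a total train batch size of 16
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    max_steps=2500,
)
```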
sail-rvc/paulwilliams
sail-rvc
"2023-07-14T07:42:33Z"
1
0
transformers
[ "transformers", "rvc", "sail-rvc", "audio-to-audio", "endpoints_compatible", "region:us" ]
audio-to-audio
"2023-07-14T07:42:06Z"
--- pipeline_tag: audio-to-audio tags: - rvc - sail-rvc --- # paulwilliams ## RVC Model ![banner](https://i.imgur.com/xocCjhH.jpg) This model repo was automatically generated. Date: 2023-07-14 07:42:33 Bot Name: juuxnscrap Model Type: RVC Source: https://huggingface.co/juuxn/RVCModels/ Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
1-Girl-15-Hands-Viral-X/FULL.One-Girl-15-Hands.Viral.Video.On.Social.Media.X
1-Girl-15-Hands-Viral-X
"2025-02-20T19:24:06Z"
0
0
null
[ "region:us" ]
null
"2025-02-20T19:23:56Z"
[🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)](https://lekedvideo.xyz/watch/) [🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://lekedvideo.xyz/watch/) [<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://lekedvideo.xyz/watch/)
LATEiimas/roberta-base-entity-roles-multiclassification-en
LATEiimas
"2025-02-13T04:28:34Z"
0
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2025-02-13T04:13:37Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
botenius/43ad51c3-1fdd-427f-ae65-45dd2bb9af11
botenius
"2025-02-05T14:23:35Z"
6
0
peft
[ "peft", "safetensors", "falcon", "axolotl", "generated_from_trainer", "custom_code", "base_model:tiiuae/falcon-rw-1b", "base_model:adapter:tiiuae/falcon-rw-1b", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-02-05T13:47:29Z"
--- library_name: peft license: apache-2.0 base_model: tiiuae/falcon-rw-1b tags: - axolotl - generated_from_trainer model-index: - name: 43ad51c3-1fdd-427f-ae65-45dd2bb9af11 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: tiiuae/falcon-rw-1b bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 5ee86139ebe178b8_train_data.json ds_type: json format: custom path: /workspace/input_data/5ee86139ebe178b8_train_data.json type: field_instruction: premise field_output: hypothesis format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: null eval_batch_size: 2 eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: true hub_model_id: botenius/43ad51c3-1fdd-427f-ae65-45dd2bb9af11 hub_repo: null hub_strategy: end hub_token: null learning_rate: 0.0001 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 500 micro_batch_size: 2 mlflow_experiment_name: /tmp/5ee86139ebe178b8_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null saves_per_epoch: null sequence_len: 1024 special_tokens: pad_token: <|endoftext|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: techspear-hub wandb_mode: online wandb_name: fb9c5b1f-fac7-4b59-97f7-11d976fca76d wandb_project: Gradients-On-13 wandb_run: your_name wandb_runid: fb9c5b1f-fac7-4b59-97f7-11d976fca76d warmup_steps: 5 weight_decay: 0.01 xformers_attention: null ``` </details><br> # 43ad51c3-1fdd-427f-ae65-45dd2bb9af11 This model is a fine-tuned version of [tiiuae/falcon-rw-1b](https://huggingface.co/tiiuae/falcon-rw-1b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.8866 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.8819 | 0.0393 | 500 | 0.8866 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
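As a usage note: runs like this are normally reproduced by saving the config above to a file and invoking axolotl's training entry point. A minimal sketch, assuming axolotl 0.4.1 is installed, the referenced dataset file exists at the configured path, and the config is saved as `config.yaml` (the filename is an assumption):

```
accelerate launch -m axolotl.cli.train config.yaml
```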
error577/7a3ea88d-0b5f-4a18-884e-fbabaaf41994
error577
"2025-02-12T06:27:00Z"
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-1.5B", "base_model:adapter:unsloth/Qwen2-1.5B", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
"2025-02-12T03:47:52Z"
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-1.5B tags: - axolotl - generated_from_trainer model-index: - name: 7a3ea88d-0b5f-4a18-884e-fbabaaf41994 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: qlora auto_resume_from_checkpoints: true base_model: unsloth/Qwen2-1.5B bf16: auto chat_template: llama3 dataloader_num_workers: 6 dataset_prepared_path: null datasets: - data_files: - b59526d9eef1999c_train_data.json ds_type: json format: custom path: /workspace/input_data/b59526d9eef1999c_train_data.json type: field_input: facts field_instruction: prompt_serial field_output: hypothesis format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: 3 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 16 gradient_checkpointing: true group_by_length: true hub_model_id: error577/7a3ea88d-0b5f-4a18-884e-fbabaaf41994 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0003 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.3 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 128 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_steps: 1000 micro_batch_size: 1 mlflow_experiment_name: /tmp/b59526d9eef1999c_train_data.json model_type: AutoModelForCausalLM num_epochs: 50 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.002 wandb_entity: null wandb_mode: online wandb_name: a5f28714-d661-40e7-bdf5-8866392be325 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: a5f28714-d661-40e7-bdf5-8866392be325 warmup_steps: 10 weight_decay: 0.01 xformers_attention: null ``` </details><br> # 7a3ea88d-0b5f-4a18-884e-fbabaaf41994 This model is a fine-tuned version of [unsloth/Qwen2-1.5B](https://huggingface.co/unsloth/Qwen2-1.5B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.1342 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.3433 | 0.0001 | 1 | 2.3670 | | 0.0248 | 0.0073 | 50 | 0.3307 | | 0.0012 | 0.0146 | 100 | 0.3740 | | 0.0042 | 0.0219 | 150 | 0.4128 | | 0.0002 | 0.0292 | 200 | 0.2704 | | 0.0008 | 0.0365 | 250 | 0.1978 | | 0.0003 | 0.0438 | 300 | 0.2199 | | 0.0003 | 0.0511 | 350 | 0.1445 | | 0.0001 | 0.0584 | 400 | 0.1306 | | 0.0002 | 0.0657 | 450 | 0.1239 | | 0.0106 | 0.0730 | 500 | 0.1232 | | 0.0005 | 0.0803 | 550 | 0.1643 | | 0.0002 | 0.0876 | 600 | 0.1736 | | 0.0 | 0.0949 | 650 | 0.1342 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
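The card above includes no usage snippet. As a minimal sketch (an assumption following the standard PEFT adapter-loading pattern, not part of the original card), the adapter can be applied on top of its base model like this:

```python
# Hypothetical usage sketch: the repo ids come from the card above;
# everything else is the standard PEFT adapter-loading pattern.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2-1.5B")
model = PeftModel.from_pretrained(base, "error577/7a3ea88d-0b5f-4a18-884e-fbabaaf41994")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2-1.5B")

inputs = tokenizer("Hello!", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```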
AkhilTolani/parler-tts-music-refined-combined-dataset-190000
AkhilTolani
"2024-08-09T20:02:05Z"
5
0
transformers
[ "transformers", "safetensors", "parler_tts", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-08-09T19:58:56Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
powermove72/GK-MoE-0.1
powermove72
"2024-06-03T22:11:56Z"
7
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "argilla/notus-7b-v1", "GritLM/GritLM-7B", "conversational", "custom_code", "base_model:GritLM/GritLM-7B", "base_model:merge:GritLM/GritLM-7B", "base_model:argilla/notus-7b-v1", "base_model:merge:argilla/notus-7b-v1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-03T20:16:34Z"
--- license: apache-2.0 tags: - moe - frankenmoe - merge - mergekit - lazymergekit - argilla/notus-7b-v1 - GritLM/GritLM-7B base_model: - argilla/notus-7b-v1 - GritLM/GritLM-7B --- # GK-MoE-0.1 GK-MoE-0.1 is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [argilla/notus-7b-v1](https://huggingface.co/argilla/notus-7b-v1) * [GritLM/GritLM-7B](https://huggingface.co/GritLM/GritLM-7B) ## 🧩 Configuration ```yaml base_model: GritLM/GritLM-7B experts: - source_model: argilla/notus-7b-v1 positive_prompts: - "chat" - "assistant" - "tell me" - "explain" - "I want" - "reason" - "math" - "mathematics" - "solve" - "count" - source_model: GritLM/GritLM-7B positive_prompts: - "code" - "VB.NET" - "vb.net" - "programming" - "algorithm" - "develop" ``` ## 💻 Usage ```python !pip install -qU transformers bitsandbytes accelerate from transformers import AutoTokenizer import transformers import torch model = "powermove72/GK-MoE-0.1" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True}, ) messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
srushtisingh/EthicalSFTModel
srushtisingh
"2024-06-09T21:48:33Z"
3
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-09T19:02:12Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MoGP/f_x_balanced
MoGP
"2024-06-03T11:33:16Z"
105
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-03T11:03:47Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MissXie/Llama-3.2-1B-HW1
MissXie
"2025-03-08T06:10:35Z"
0
0
null
[ "safetensors", "llama", "trl", "sft", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
"2025-03-08T01:24:04Z"
--- license: apache-2.0 tags: - trl - sft ---
Lily-Phillips-101-Challenge-Video-4K/FULL.Lily.Phillips.101.Challenge.Video.Viral.Video.On.Social.Media.X
Lily-Phillips-101-Challenge-Video-4K
"2025-02-28T18:47:18Z"
0
0
null
[ "region:us" ]
null
"2025-02-28T18:47:12Z"
null
tensorblock/POLAR-14B-DPO-v1.4-GGUF
tensorblock
"2024-11-16T01:44:07Z"
13
0
transformers
[ "transformers", "gguf", "TensorBlock", "GGUF", "text-generation", "ko", "base_model:x2bee/POLAR-14B-DPO-v1.4", "base_model:quantized:x2bee/POLAR-14B-DPO-v1.4", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
"2024-11-14T19:35:32Z"
--- library_name: transformers license: apache-2.0 language: - ko pipeline_tag: text-generation tags: - TensorBlock - GGUF base_model: x2bee/POLAR-14B-DPO-v1.4 --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"> Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a> </p> </div> </div> ## x2bee/POLAR-14B-DPO-v1.4 - GGUF This repo contains GGUF format model files for [x2bee/POLAR-14B-DPO-v1.4](https://huggingface.co/x2bee/POLAR-14B-DPO-v1.4). The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d). <div style="text-align: left; margin: 20px 0;"> <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;"> Run them on the TensorBlock client using your local machine ↗ </a> </div> ## Prompt template ``` ``` ## Model file specification | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [POLAR-14B-DPO-v1.4-Q2_K.gguf](https://huggingface.co/tensorblock/POLAR-14B-DPO-v1.4-GGUF/blob/main/POLAR-14B-DPO-v1.4-Q2_K.gguf) | Q2_K | 4.924 GB | smallest, significant quality loss - not recommended for most purposes | | [POLAR-14B-DPO-v1.4-Q3_K_S.gguf](https://huggingface.co/tensorblock/POLAR-14B-DPO-v1.4-GGUF/blob/main/POLAR-14B-DPO-v1.4-Q3_K_S.gguf) | Q3_K_S | 5.741 GB | very small, high quality loss | | [POLAR-14B-DPO-v1.4-Q3_K_M.gguf](https://huggingface.co/tensorblock/POLAR-14B-DPO-v1.4-GGUF/blob/main/POLAR-14B-DPO-v1.4-Q3_K_M.gguf) | Q3_K_M | 6.400 GB | very small, high quality loss | | [POLAR-14B-DPO-v1.4-Q3_K_L.gguf](https://huggingface.co/tensorblock/POLAR-14B-DPO-v1.4-GGUF/blob/main/POLAR-14B-DPO-v1.4-Q3_K_L.gguf) | Q3_K_L | 6.966 GB | small, substantial quality loss | | [POLAR-14B-DPO-v1.4-Q4_0.gguf](https://huggingface.co/tensorblock/POLAR-14B-DPO-v1.4-GGUF/blob/main/POLAR-14B-DPO-v1.4-Q4_0.gguf) | Q4_0 | 7.484 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [POLAR-14B-DPO-v1.4-Q4_K_S.gguf](https://huggingface.co/tensorblock/POLAR-14B-DPO-v1.4-GGUF/blob/main/POLAR-14B-DPO-v1.4-Q4_K_S.gguf) | Q4_K_S | 7.541 GB | small, greater quality loss | | [POLAR-14B-DPO-v1.4-Q4_K_M.gguf](https://huggingface.co/tensorblock/POLAR-14B-DPO-v1.4-GGUF/blob/main/POLAR-14B-DPO-v1.4-Q4_K_M.gguf) | Q4_K_M | 7.967 GB | medium, balanced quality - recommended | | [POLAR-14B-DPO-v1.4-Q5_0.gguf](https://huggingface.co/tensorblock/POLAR-14B-DPO-v1.4-GGUF/blob/main/POLAR-14B-DPO-v1.4-Q5_0.gguf) | Q5_0 | 9.124 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [POLAR-14B-DPO-v1.4-Q5_K_S.gguf](https://huggingface.co/tensorblock/POLAR-14B-DPO-v1.4-GGUF/blob/main/POLAR-14B-DPO-v1.4-Q5_K_S.gguf) | Q5_K_S | 9.124 GB | large, low quality loss - recommended | | 
[POLAR-14B-DPO-v1.4-Q5_K_M.gguf](https://huggingface.co/tensorblock/POLAR-14B-DPO-v1.4-GGUF/blob/main/POLAR-14B-DPO-v1.4-Q5_K_M.gguf) | Q5_K_M | 9.373 GB | large, very low quality loss - recommended | | [POLAR-14B-DPO-v1.4-Q6_K.gguf](https://huggingface.co/tensorblock/POLAR-14B-DPO-v1.4-GGUF/blob/main/POLAR-14B-DPO-v1.4-Q6_K.gguf) | Q6_K | 10.867 GB | very large, extremely low quality loss | | [POLAR-14B-DPO-v1.4-Q8_0.gguf](https://huggingface.co/tensorblock/POLAR-14B-DPO-v1.4-GGUF/blob/main/POLAR-14B-DPO-v1.4-Q8_0.gguf) | Q8_0 | 14.075 GB | very large, extremely low quality loss - not recommended | ## Downloading instructions ### Command line First, install the Hugging Face Hub CLI: ```shell pip install -U "huggingface_hub[cli]" ``` Then, download an individual model file to a local directory: ```shell huggingface-cli download tensorblock/POLAR-14B-DPO-v1.4-GGUF --include "POLAR-14B-DPO-v1.4-Q2_K.gguf" --local-dir MY_LOCAL_DIR ``` If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try: ```shell huggingface-cli download tensorblock/POLAR-14B-DPO-v1.4-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf' ```
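If you prefer Python over the CLI, the same single-file download can be done with the `huggingface_hub` library. A minimal sketch, assuming the library is installed (`pip install huggingface_hub`):

```python
# Python equivalent of the single-file CLI download above;
# repo_id and filename are the ones used in this card.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="tensorblock/POLAR-14B-DPO-v1.4-GGUF",
    filename="POLAR-14B-DPO-v1.4-Q2_K.gguf",
    local_dir="MY_LOCAL_DIR",
)
print(local_path)  # path of the downloaded GGUF file
```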
Arjun098/tinyllama-colorist-v1
Arjun098
"2024-02-18T07:56:42Z"
0
0
null
[ "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "region:us" ]
null
"2024-02-18T07:29:49Z"
--- license: apache-2.0 base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 tags: - trl - sft - generated_from_trainer model-index: - name: tinyllama-colorist-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tinyllama-colorist-v1 This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
ericson333/wonderwall_abomination
ericson333
"2025-02-02T12:58:22Z"
49
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2025-02-02T12:42:13Z"
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: wonderwall_abomination --- # Wonderwall_Abomination <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `wonderwall_abomination` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('ericson333/wonderwall_abomination', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
albertus-sussex/veriscrape-simcse-university-wo-ref-gpt-4o-mini
albertus-sussex
"2025-04-02T09:13:09Z"
0
0
transformers
[ "transformers", "safetensors", "roberta", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
feature-extraction
"2025-04-02T09:12:43Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
atmikah/Reinforce-CartPole-v1
atmikah
"2024-02-09T00:31:27Z"
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2024-02-09T00:31:18Z"
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
TweedleDeepLearnings/245552ee-b0fb-45f5-b634-348ab26c83ab
TweedleDeepLearnings
"2025-01-30T00:45:19Z"
113
0
peft
[ "peft", "safetensors", "axolotl", "generated_from_trainer", "base_model:huggyllama/llama-7b", "base_model:adapter:huggyllama/llama-7b", "license:other", "region:us" ]
null
"2025-01-29T23:40:35Z"
--- library_name: peft license: other base_model: huggyllama/llama-7b tags: - axolotl - generated_from_trainer model-index: - name: c4b201cf-0eeb-4380-a91f-cd6329614a81 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora bf16: auto chat_template: llama3 dataset_prepared_path: null debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 16 gradient_checkpointing: true gradient_clipping: 0.1 group_by_length: false hub_repo: null hub_strategy: end hub_token: null learning_rate: 1.0e-04 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.1 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: linear max_steps: 200 micro_batch_size: 128 mlflow_experiment_name: /tmp/aed51b8e2c089967_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 4096 special_tokens: pad_token: </PAD> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 6a8f76dd-7262-490a-905c-7b83c0f56891 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 6a8f76dd-7262-490a-905c-7b83c0f56891 warmup_steps: 5 weight_decay: 0.1 xformers_attention: true ``` </details><br> ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 128 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 2048 - optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
IMFDEtienne/smollm2-135M-sft
IMFDEtienne
"2024-12-08T19:04:27Z"
130
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:HuggingFaceTB/SmolLM2-135M", "base_model:finetune:HuggingFaceTB/SmolLM2-135M", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-12-06T14:53:28Z"
--- base_model: HuggingFaceTB/SmolLM2-135M library_name: transformers model_name: smollm2-135M-sft tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for smollm2-135M-sft This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="IMFDEtienne/smollm2-135M-sft", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/tsst-etienne-imfd/huggingface/runs/3ontt6k5) This model was trained with SFT. ### Framework versions - TRL: 0.12.2 - Transformers: 4.46.3 - Pytorch: 2.5.1 - Datasets: 3.1.0 - Tokenizers: 0.20.3 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
backyardai/Hermes-3-Llama-3.1-8B-GGUF
backyardai
"2024-08-29T16:37:31Z"
239
2
null
[ "gguf", "Llama-3", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl", "roleplaying", "chat", "en", "base_model:NousResearch/Hermes-3-Llama-3.1-8B", "base_model:quantized:NousResearch/Hermes-3-Llama-3.1-8B", "license:llama3", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-08-29T16:25:47Z"
--- base_model: NousResearch/Hermes-3-Llama-3.1-8B language: - en license: llama3 model_name: Hermes-3-Llama-3.1-8B-GGUF tags: - Llama-3 - instruct - finetune - chatml - gpt4 - synthetic data - distillation - function calling - json mode - axolotl - roleplaying - chat quantized_by: brooketh parameter_count: 8030261312 --- <img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;"> **<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>** <p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p> <p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p> *** # Hermes 3 Llama 3.1 8B - **Creator:** [NousResearch](https://huggingface.co/NousResearch/) - **Original:** [Hermes 3 Llama 3.1 8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) - **Date Created:** 2024-07-28 - **Trained Context:** 131072 tokens - **Description:** Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coherence, and improvements across the board. *** ## What is a GGUF? GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher end GPUs with ample VRAM, GGUFs can be efficiently run on a wider variety of hardware. GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight. *** <img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;"> ## Backyard AI - Free, local AI chat application. - One-click installation on Mac and PC. - Automatically use GPU for maximum speed. - Built-in model manager. - High-quality character hub. - Zero-config desktop-to-mobile tethering. Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable. **Join us on [Discord](https://discord.gg/SyNN2vC9tQ)** ***
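To make the quantization idea in the card above concrete, here is a toy sketch, an illustration only and not Backyard AI's or llama.cpp's actual scheme, of reducing weight precision to 4-bit integers with a per-tensor scale:

```python
# Toy symmetric 4-bit quantization, for illustration only.
import numpy as np

weights = np.random.randn(8).astype(np.float32)   # pretend weight tensor
scale = np.abs(weights).max() / 7                 # signed 4-bit values span -8..7
quantized = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
restored = quantized.astype(np.float32) * scale   # dequantized, reduced precision

print("original:", weights)
print("restored:", restored)  # close to the original, but not exact
```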
mradermacher/Behemoth-123B-v1.1-i1-GGUF
mradermacher
"2024-11-11T22:34:36Z"
12
0
transformers
[ "transformers", "gguf", "en", "base_model:TheDrummer/Behemoth-123B-v1.1", "base_model:quantized:TheDrummer/Behemoth-123B-v1.1", "license:other", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
"2024-11-11T05:54:17Z"
--- base_model: TheDrummer/Behemoth-123B-v1.1 language: - en library_name: transformers license: other quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/TheDrummer/Behemoth-123B-v1.1 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Behemoth-123B-v1.1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-IQ1_S.gguf) | i1-IQ1_S | 26.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-IQ1_M.gguf) | i1-IQ1_M | 28.5 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 32.5 | | | [GGUF](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 36.2 | | | [GGUF](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-IQ2_S.gguf) | i1-IQ2_S | 38.5 | | | [GGUF](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-IQ2_M.gguf) | i1-IQ2_M | 41.7 | | | [GGUF](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-Q2_K.gguf) | i1-Q2_K | 45.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 47.1 | lower quality | | [PART 1](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-IQ3_XS.gguf.part2of2) | i1-IQ3_XS | 50.2 | | | [PART 1](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-Q3_K_S.gguf.part2of2) | i1-Q3_K_S | 52.9 | IQ3_XS probably better | | [PART 1](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-IQ3_S.gguf.part2of2) | i1-IQ3_S | 53.1 | beats Q3_K* | | [PART 1](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-IQ3_M.gguf.part2of2) | i1-IQ3_M | 55.4 | | | [PART 1](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 59.2 
| IQ3_S probably better | | [PART 1](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 64.7 | IQ3_M probably better | | [PART 1](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 65.5 | | | [PART 1](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 69.4 | fast, low quality | | [PART 1](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 69.7 | optimal size/speed/quality | | [PART 1](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 73.3 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 84.5 | | | [PART 1](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 86.6 | | | [PART 1](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Behemoth-123B-v1.1-i1-GGUF/resolve/main/Behemoth-123B-v1.1.i1-Q6_K.gguf.part3of3) | i1-Q6_K | 100.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
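The card defers to TheBloke's READMEs for concatenating multi-part files; a minimal sketch (an assumption based on the `partXofY` naming shown in the table above) is to join the parts byte-for-byte:

```python
# Join split GGUF parts into a single file; the filenames follow
# the partXofY pattern from the table above.
import shutil

parts = [
    "Behemoth-123B-v1.1.i1-Q6_K.gguf.part1of3",
    "Behemoth-123B-v1.1.i1-Q6_K.gguf.part2of3",
    "Behemoth-123B-v1.1.i1-Q6_K.gguf.part3of3",
]
with open("Behemoth-123B-v1.1.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream copy to keep memory use low
```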
PhilSad/poca-SoccerTwos
PhilSad
"2023-06-28T08:36:16Z"
2
0
ml-agents
[ "ml-agents", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
"2023-06-28T08:10:30Z"
--- library_name: ml-agents tags: - SoccerTwos - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent using ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: PhilSad/poca-SoccerTwos 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
havinash-ai/2f0cc110-c750-499b-8513-027cc6eb9a5a
havinash-ai
"2025-01-24T11:08:17Z"
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:heegyu/WizardVicuna2-13b-hf", "base_model:adapter:heegyu/WizardVicuna2-13b-hf", "region:us" ]
null
"2025-01-24T11:03:47Z"
--- library_name: peft base_model: heegyu/WizardVicuna2-13b-hf tags: - axolotl - generated_from_trainer model-index: - name: 2f0cc110-c750-499b-8513-027cc6eb9a5a results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: heegyu/WizardVicuna2-13b-hf bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - feb137b5fcb7d894_train_data.json ds_type: json format: custom path: /workspace/input_data/feb137b5fcb7d894_train_data.json type: field_instruction: error_sentence field_output: correct_sentence format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: havinash-ai/2f0cc110-c750-499b-8513-027cc6eb9a5a hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 10 micro_batch_size: 2 mlflow_experiment_name: /tmp/feb137b5fcb7d894_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 9434611b-1007-4017-bf85-5d67c2fd764c wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 9434611b-1007-4017-bf85-5d67c2fd764c warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 2f0cc110-c750-499b-8513-027cc6eb9a5a This model is a fine-tuned version of [heegyu/WizardVicuna2-13b-hf](https://huggingface.co/heegyu/WizardVicuna2-13b-hf) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.4137 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.5948 | 0.0004 | 1 | 0.6591 | | 0.7963 | 0.0013 | 3 | 0.6562 | | 0.5741 | 0.0026 | 6 | 0.5927 | | 0.413 | 0.0038 | 9 | 0.4137 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
mradermacher/MN-12B-FoxFrame-Shinori-GGUF
mradermacher
"2025-02-27T10:12:53Z"
360
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:DoppelReflEx/MN-12B-FoxFrame-Shinori", "base_model:quantized:DoppelReflEx/MN-12B-FoxFrame-Shinori", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-02-10T09:35:07Z"
--- base_model: DoppelReflEx/MN-12B-FoxFrame-Shinori language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/DoppelReflEx/MN-12B-FoxFrame-Shinori <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/MN-12B-FoxFrame-Shinori-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MN-12B-FoxFrame-Shinori-GGUF/resolve/main/MN-12B-FoxFrame-Shinori.Q2_K.gguf) | Q2_K | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/MN-12B-FoxFrame-Shinori-GGUF/resolve/main/MN-12B-FoxFrame-Shinori.Q3_K_S.gguf) | Q3_K_S | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/MN-12B-FoxFrame-Shinori-GGUF/resolve/main/MN-12B-FoxFrame-Shinori.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MN-12B-FoxFrame-Shinori-GGUF/resolve/main/MN-12B-FoxFrame-Shinori.Q3_K_L.gguf) | Q3_K_L | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/MN-12B-FoxFrame-Shinori-GGUF/resolve/main/MN-12B-FoxFrame-Shinori.IQ4_XS.gguf) | IQ4_XS | 6.9 | | | [GGUF](https://huggingface.co/mradermacher/MN-12B-FoxFrame-Shinori-GGUF/resolve/main/MN-12B-FoxFrame-Shinori.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MN-12B-FoxFrame-Shinori-GGUF/resolve/main/MN-12B-FoxFrame-Shinori.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MN-12B-FoxFrame-Shinori-GGUF/resolve/main/MN-12B-FoxFrame-Shinori.Q5_K_S.gguf) | Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/MN-12B-FoxFrame-Shinori-GGUF/resolve/main/MN-12B-FoxFrame-Shinori.Q5_K_M.gguf) | Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/MN-12B-FoxFrame-Shinori-GGUF/resolve/main/MN-12B-FoxFrame-Shinori.Q6_K.gguf) | Q6_K | 10.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/MN-12B-FoxFrame-Shinori-GGUF/resolve/main/MN-12B-FoxFrame-Shinori.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
AlignmentResearch/robust_llm_pythia-tt-14m-mz-ada-v3-ch-138000
AlignmentResearch
"2024-03-22T21:09:00Z"
105
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-14m", "base_model:finetune:EleutherAI/pythia-14m", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
"2024-03-22T21:08:53Z"
--- tags: - generated_from_trainer base_model: EleutherAI/pythia-14m model-index: - name: robust_llm_pythia-tt-14m-mz-ada-v3-ch-138000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-14m-mz-ada-v3-ch-138000 This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
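The card above leaves usage undocumented; since the checkpoint is a GPT-NeoX sequence classifier, a hedged sketch with the standard `transformers` text-classification pipeline should apply (the input sentence is illustrative, and label names come from whatever the checkpoint's config defines):

```python
# Hedged usage sketch for this fine-tuned classifier.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-tt-14m-mz-ada-v3-ch-138000",
)
print(clf("An example input sentence."))  # labels are defined by the model config
```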
fpadovani/english_childes_random_30
fpadovani
"2025-01-24T08:12:21Z"
5
0
transformers
[ "transformers", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2025-01-24T07:05:58Z"
--- library_name: transformers tags: - generated_from_trainer model-index: - name: childes_mlm_unmasking_random_30 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # childes_mlm_unmasking_random_30 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.5758 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 30 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100000 - training_steps: 400000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-------:|:-----:|:---------------:| | No log | 1.2698 | 2000 | 5.5167 | | 6.2352 | 2.5397 | 4000 | 5.5041 | | 6.2352 | 3.8095 | 6000 | 5.3757 | | 5.1227 | 5.0794 | 8000 | 4.1180 | | 5.1227 | 6.3492 | 10000 | 3.6254 | | 3.7618 | 7.6190 | 12000 | 3.3990 | | 3.7618 | 8.8889 | 14000 | 3.2469 | | 3.3105 | 10.1587 | 16000 | 3.0993 | | 3.3105 | 11.4286 | 18000 | 3.0238 | | 3.0678 | 12.6984 | 20000 | 2.9368 | | 3.0678 | 13.9683 | 22000 | 2.8828 | | 2.9113 | 15.2381 | 24000 | 2.8041 | | 2.9113 | 16.5079 | 26000 | 2.7648 | | 2.794 | 17.7778 | 28000 | 2.7542 | | 2.794 | 19.0476 | 30000 | 2.6823 | | 2.7206 | 20.3175 | 32000 | 2.6820 | | 2.7206 | 21.5873 | 34000 | 2.6692 | | 2.6652 | 22.8571 | 36000 | 2.6250 | | 2.6652 | 24.1270 | 38000 | 2.6326 | | 2.6213 | 25.3968 | 40000 | 2.6201 | | 2.6213 | 26.6667 | 42000 | 2.5680 | | 2.5907 | 27.9365 | 44000 | 2.5839 | | 2.5907 | 29.2063 | 46000 | 2.5799 | | 2.5648 | 30.4762 | 48000 | 2.5726 | | 2.5648 | 31.7460 | 50000 | 2.5995 | | 2.5496 | 33.0159 | 52000 | 2.5492 | | 2.5496 | 34.2857 | 54000 | 2.5562 | | 2.5325 | 35.5556 | 56000 | 2.5572 | | 2.5325 | 36.8254 | 58000 | 2.5758 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.5.1+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
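Since the card above gives no inference snippet, a hedged sketch with the fill-mask pipeline; the model is RoBERTa-style, so the `<mask>` token is assumed, and the child-directed example sentence is illustrative:

```python
# Hedged sketch: query the masked LM trained on CHILDES-style data.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="fpadovani/english_childes_random_30")
for pred in unmasker("The child <mask> with the ball."):
    print(pred["token_str"], round(pred["score"], 3))
```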
mradermacher/Tiamat-7b-1.1-DPO-GGUF
mradermacher
"2024-06-06T01:33:11Z"
19
0
transformers
[ "transformers", "gguf", "en", "base_model:Gryphe/Tiamat-7b-1.1-DPO", "base_model:quantized:Gryphe/Tiamat-7b-1.1-DPO", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-06-05T18:47:31Z"
--- base_model: Gryphe/Tiamat-7b-1.1-DPO language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Gryphe/Tiamat-7b-1.1-DPO <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
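As a hedged sketch for this quant, `llama-cpp-python`'s chat-completion interface attempts to use the chat template embedded in the GGUF metadata; whether this checkpoint ships such a template is an assumption to verify against the original model card:

```python
# Hedged sketch: chat-style inference against a downloaded quant.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Tiamat-7b-1.1-DPO-GGUF",
    filename="Tiamat-7b-1.1-DPO.Q4_K_M.gguf",  # "fast, recommended" per the table
)
llm = Llama(model_path=path, n_ctx=2048)
resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in two sentences."}],
    max_tokens=128,
)
print(resp["choices"][0]["message"]["content"])
```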
KaiNylund/mt5-small-si-10k
KaiNylund
"2024-04-08T05:50:18Z"
107
0
transformers
[ "transformers", "safetensors", "mt5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-04-08T05:49:40Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
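The card's "How to Get Started" section above is empty; a hedged sketch, assuming this is a standard mT5 seq2seq checkpoint (its task and training data are undocumented, so the input text is purely illustrative):

```python
# Hedged sketch: load the checkpoint as a generic mT5 seq2seq model.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("KaiNylund/mt5-small-si-10k")
model = AutoModelForSeq2SeqLM.from_pretrained("KaiNylund/mt5-small-si-10k")

inputs = tok("An example source sentence.", return_tensors="pt")
ids = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(ids[0], skip_special_tokens=True))
```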
YanweiLi/llama-vid-13b-full-336
YanweiLi
"2023-12-02T09:06:10Z"
22
0
transformers
[ "transformers", "pytorch", "llava", "text-generation", "vision-language model", "llama", "video understanding", "arxiv:2311.17043", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2023-11-29T06:22:23Z"
--- tags: - vision-language model - llama - video understanding --- # LLaMA-VID Model Card <a href='https://llama-vid.github.io/'><img src='https://img.shields.io/badge/Project-Page-Green'></a> <a href='https://arxiv.org/abs/2311.17043'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a> ## Model details LLaMA-VID empowers existing frameworks to support hour-long videos and pushes their upper limit with an extra context token. **Model type:** LLaMA-VID is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data. We build this repo based on LLaVA. **Model date:** llama-vid-13b-full-336 was trained in November 2023. ## License Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved. **Where to send questions or comments about the model:** https://github.com/dvlab-research/LLaMA-VID/issues ## Intended use **Primary intended uses:** The primary use of LLaMA-VID is research on large multimodal models and chatbots. **Primary intended users:** The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence. ## Training data This model is trained on the LLaVA-1.5 dataset, including - 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP. - 158K GPT-generated multimodal instruction-following data. - 450K academic-task-oriented VQA data mixture. - 40K ShareGPT data.
vwxyzjn/ppo_zephyr310
vwxyzjn
"2024-05-01T17:17:22Z"
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "generated_from_trainer", "conversational", "base_model:HuggingFaceH4/mistral-7b-sft-beta", "base_model:finetune:HuggingFaceH4/mistral-7b-sft-beta", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-01T17:16:23Z"
--- license: mit base_model: HuggingFaceH4/mistral-7b-sft-beta tags: - generated_from_trainer model-index: - name: ppo_zephyr310 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ppo_zephyr310 This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-06 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 7 - gradient_accumulation_steps: 32 - total_train_batch_size: 224 - total_eval_batch_size: 56 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.19.1
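For a conversational checkpoint like this one, a hedged inference sketch; it assumes the tokenizer inherits a chat template from its Zephyr-style base (fall back to a plain prompt if `apply_chat_template` raises), and a 7B model in bf16 needs roughly 16 GB of accelerator memory:

```python
# Hedged sketch: chat-formatted generation with the PPO-tuned checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("vwxyzjn/ppo_zephyr310")
model = AutoModelForCausalLM.from_pretrained(
    "vwxyzjn/ppo_zephyr310", torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Give me one tip for writing clear documentation."}]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```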
mteverst/mystv0_agg
mteverst
"2023-10-24T19:35:00Z"
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-10-24T09:33:52Z"
--- license: mit tags: - generated_from_trainer model-index: - name: mystv0_agg results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mystv0_agg This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.0722 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.0898 | 3.55 | 1000 | 1.3242 | | 0.6944 | 7.1 | 2000 | 1.4106 | | 0.6876 | 10.64 | 3000 | 1.3813 | | 0.6856 | 14.19 | 4000 | 1.4327 | | 0.685 | 17.74 | 5000 | 1.3641 | | 0.6826 | 21.29 | 6000 | 1.4222 | | 0.6808 | 24.83 | 7000 | 1.3972 | | 0.6811 | 28.38 | 8000 | 1.3969 | | 0.6757 | 31.93 | 9000 | 1.4670 | | 0.6723 | 35.48 | 10000 | 1.4983 | | 0.6668 | 39.02 | 11000 | 1.5150 | | 0.6611 | 42.57 | 12000 | 1.5096 | | 0.6524 | 46.12 | 13000 | 1.5601 | | 0.642 | 49.67 | 14000 | 1.6121 | | 0.6287 | 53.22 | 15000 | 1.6332 | | 0.6129 | 56.76 | 16000 | 1.6489 | | 0.5929 | 60.31 | 17000 | 1.7623 | | 0.5705 | 63.86 | 18000 | 1.7553 | | 0.5455 | 67.41 | 19000 | 1.8321 | | 0.5223 | 70.95 | 20000 | 1.9012 | | 0.498 | 74.5 | 21000 | 1.9379 | | 0.4788 | 78.05 | 22000 | 1.9693 | | 0.461 | 81.6 | 23000 | 2.0177 | | 0.4482 | 85.14 | 24000 | 2.0362 | | 0.4388 | 88.69 | 25000 | 2.0570 | | 0.4327 | 92.24 | 26000 | 2.0703 | | 0.4293 | 95.79 | 27000 | 2.0719 | | 0.4278 | 99.33 | 28000 | 2.0722 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.1.0+cu121 - Datasets 2.13.1 - Tokenizers 0.13.3
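Since the card above stops at training details, a hedged sketch of sampling from this GPT-2 fine-tune with the plain text-generation pipeline; the prompt and sampling settings are illustrative, not tuned:

```python
# Hedged sketch: free-form generation from the fine-tuned GPT-2 checkpoint.
from transformers import pipeline

gen = pipeline("text-generation", model="mteverst/mystv0_agg")
out = gen("Once upon a time", max_new_tokens=40, do_sample=True, temperature=0.8)
print(out[0]["generated_text"])
```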
lalok/nectar_aihub_model_15000steps
lalok
"2024-07-11T08:40:37Z"
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-medium", "base_model:finetune:openai/whisper-medium", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-07-09T05:28:06Z"
--- license: apache-2.0 base_model: openai/whisper-medium tags: - generated_from_trainer model-index: - name: nectar_aihub_model_15000steps results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nectar_aihub_model_15000steps This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1385 - Cer: 11.6172 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 15000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:------:|:-----:|:---------------:|:-------:| | 0.225 | 0.1307 | 1500 | 0.2025 | 14.2277 | | 0.1842 | 0.2614 | 3000 | 0.1862 | 12.2103 | | 0.1767 | 0.3922 | 4500 | 0.1746 | 11.7624 | | 0.1459 | 0.5229 | 6000 | 0.1656 | 11.3708 | | 0.15 | 0.6536 | 7500 | 0.1577 | 11.1017 | | 0.1415 | 0.7843 | 9000 | 0.1517 | 11.3546 | | 0.1559 | 0.9150 | 10500 | 0.1446 | 10.7843 | | 0.0772 | 1.0458 | 12000 | 0.1430 | 10.9494 | | 0.073 | 1.1765 | 13500 | 0.1405 | 11.8585 | | 0.0737 | 1.3072 | 15000 | 0.1385 | 11.6172 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.2.2+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
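The card above reports CER as its headline metric; as a hedged sketch, the same number can be reproduced on your own (prediction, reference) pairs with the `evaluate` library (`pip install evaluate jiwer`), where the strings below are placeholders rather than samples from the actual evaluation set:

```python
# Hedged sketch: compute the character error rate the way the card reports it.
import evaluate

cer = evaluate.load("cer")
preds = ["an example model transcript"]     # placeholder model outputs
refs = ["an example reference transcript"]  # placeholder ground truth
print(100 * cer.compute(predictions=preds, references=refs))  # CER in percent
```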
ViditRaj/llama3.2-1B-gguf-ids
ViditRaj
"2025-02-07T06:21:20Z"
21
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-02-07T06:21:03Z"
--- base_model: unsloth/llama-3.2-1b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** ViditRaj - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3.2-1b-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
NbAiLabBeta/nb-whisper-medium-semantic
NbAiLabBeta
"2023-12-20T09:48:39Z"
20
1
transformers
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "onnx", "safetensors", "whisper", "automatic-speech-recognition", "audio", "asr", "hf-asr-leaderboard", "no", "nb", "nn", "en", "dataset:NbAiLab/ncc_speech", "dataset:NbAiLab/NST", "dataset:NbAiLab/NPSC", "arxiv:2212.04356", "base_model:openai/whisper-medium", "base_model:quantized:openai/whisper-medium", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2023-12-07T06:14:40Z"
--- license: apache-2.0 language: - 'no' - nb - nn - en datasets: - NbAiLab/ncc_speech - NbAiLab/NST - NbAiLab/NPSC base_model: openai/whisper-medium tags: - audio - asr - automatic-speech-recognition - hf-asr-leaderboard metrics: - wer - cer library_name: transformers pipeline_tag: automatic-speech-recognition widget: - src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/1/audio/audio.mp3 example_title: FLEURS sample 1 - src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/4/audio/audio.mp3 example_title: FLEURS sample 2 --- # Finetuned Semantic model. This model is trained 200 additional steps on top of the main model. The output from this model is less verbatim than when using the main model. The style might, for instance, be better suited for subtitling videos, since the goal is to use as few words as possible to express the essence of what is said. # NB-Whisper Medium (Release Candidate) **IMPORTANT:** These models are currently Release Candidates. We are in the final stages of testing. If everything proceeds smoothly, we plan to officially release the models later this month. Introducing the **_Norwegian NB-Whisper Medium model_**, proudly developed by the National Library of Norway. NB-Whisper is a cutting-edge series of models designed for automatic speech recognition (ASR) and speech translation. These models are based on the work of [OpenAI's Whisper](https://arxiv.org/abs/2212.04356). Each model in the series has been trained for 250,000 steps, utilizing a diverse dataset of 8 million samples. These samples consist of aligned audio clips, each 30 seconds long, culminating in a staggering 66,000 hours of speech. For an in-depth understanding of our training methodology and dataset composition, keep an eye out for our upcoming article. | Model Size | Parameters | Model | |------------|------------|------------| | Tiny | 39M | [NB-Whisper Tiny](https://huggingface.co/NbAiLabBeta/nb-whisper-tiny) | | Base | 74M | [NB-Whisper Base](https://huggingface.co/NbAiLabBeta/nb-whisper-base) | | Small | 244M | [NB-Whisper Small](https://huggingface.co/NbAiLabBeta/nb-whisper-small) | | Medium | 769M | [NB-Whisper Medium](https://huggingface.co/NbAiLabBeta/nb-whisper-medium) | | Large | 1550M | [NB-Whisper Large](https://huggingface.co/NbAiLabBeta/nb-whisper-large) | ### Specialised Models While the main models are suitable for most transcription tasks, we demonstrate how easy it is to change the output of the main model. The following models are trained 250 additional steps from the main models above, and might be suitable for more targeted use cases: - **Verbatim version**: This lower-cased variant is more literal and suitable for tasks requiring detailed transcription, such as linguistic analysis. - **Semantic version**: This variant focuses less on verbatim accuracy but captures the essence of content, ideal for meeting minutes and subtitling. 
| Model Size | Parameters | Verbatim version | Semantic version | |------------|------------|------------|------------------| | Tiny | 39M | [Tiny - verbatim](https://huggingface.co/NbAiLabBeta/nb-whisper-tiny-verbatim) | [Tiny - semantic](https://huggingface.co/NbAiLabBeta/nb-whisper-tiny-semantic) | | Base | 74M | [Base - verbatim](https://huggingface.co/NbAiLabBeta/nb-whisper-base-verbatim) | [Base - semantic](https://huggingface.co/NbAiLabBeta/nb-whisper-base-semantic) | | Small | 244M | [Small - verbatim](https://huggingface.co/NbAiLabBeta/nb-whisper-small-verbatim) | [Small - semantic](https://huggingface.co/NbAiLabBeta/nb-whisper-small-semantic) | | Medium | 769M | [Medium - verbatim](https://huggingface.co/NbAiLabBeta/nb-whisper-medium-verbatim) | [Medium - semantic](https://huggingface.co/NbAiLabBeta/nb-whisper-medium-semantic) | | Large | 1550M | [Large - verbatim](https://huggingface.co/NbAiLabBeta/nb-whisper-large-verbatim) | [Large - semantic](https://huggingface.co/NbAiLabBeta/nb-whisper-large-semantic) | ### Model Description - **Developed by:** [NB AI-Lab](https://ai.nb.no/) - **Shared by:** [NB AI-Lab](https://ai.nb.no/) - **Model type:** `whisper` - **Language(s) (NLP):** Norwegian, Norwegian Bokmål, Norwegian Nynorsk, English - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) - **Trained from model:** [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) - **Code Repository:** https://github.com/NbAiLab/nb-whisper/ - **Paper:** _Coming soon_ - **Demo:** _See Spaces on this page_ ## How to Use the Models ### Online Demos You can try the models directly through the HuggingFace Inference API, accessible on the right side of this page. Be aware that initially, the model needs to load and will run on limited CPU capacity, which might be slow. To enhance your experience, we are temporarily hosting some models on TPUs for a few days, significantly boosting their performance. Explore these under the **Spaces** section on the [Main Page](https://huggingface.co/NbAiLabBeta/). ### Local Setup with HuggingFace Alternatively, you can run the models locally. The Tiny, Base, and Small models are optimized for CPU execution. For the Medium and Large models, we recommend a system equipped with a GPU to ensure efficient processing. Setting up and using these models with HuggingFace's Transformers is straightforward, provided you have [Python](https://www.python.org/downloads/) installed on your machine. For practical demonstrations, refer to examples using this [sample mp3 file](https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3). ```bash # Download the sample file $ wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3 # Install necessary libraries. $ pip install transformers>=4.35.2 ``` After this is done, you should be able to run this in Python: ```python from transformers import pipeline # Load the model asr = pipeline("automatic-speech-recognition", "NbAiLabBeta/nb-whisper-medium-semantic") #transcribe asr("king.mp3", generate_kwargs={'task': 'transcribe', 'language': 'no'}) ``` <details> <summary>Expected output</summary> ```json { {'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. 
Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra.'} } ``` </details> #### Extended HuggingFace Examining the output above, we see that there are multiple repetitions at the end. This is because the video is longer than 30 seconds. By passing the ```chunk_length_s``` argument, we can transcribe longer files. Our experience is that we get slightly better results by setting it to 28 seconds instead of the default 30 seconds. We also recommend setting the beam size to 5 if possible. This greatly increases the accuracy but takes a bit longer and requires slightly more memory. The examples below also illustrate how to transcribe to English or Nynorsk, and how to get timestamps for sentences and words. ```python # Long Transcripts asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'no'}) # Increase accuracy by setting beam size to 5 asr("king.mp3", chunk_length_s=28, return_timestamps=True, generate_kwargs={'num_beams': 5, 'task': 'transcribe', 'language': 'no'}) # Return Timestamps asr("king.mp3", chunk_length_s=28, return_timestamps=True, generate_kwargs={'task': 'transcribe', 'language': 'no'}) # Return Word Level Timestamps asr("king.mp3", chunk_length_s=28, return_timestamps="word", generate_kwargs={'task': 'transcribe', 'language': 'no'}) # Transcribe to Nynorsk asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'nn'}) # Transcribe to English asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'en'}) ``` <details> <summary>Expected output</summary> Long transcripts: ```json { {'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra, hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbilis og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.'} } ``` Timestamps: ```json { {'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. 
Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.', 'chunks': [{'timestamp': (0.0, 5.46), 'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger'}, {'timestamp': (5.52, 8.68), 'text': ' og folk fra alle andre regioner.'}, {'timestamp': (8.68, 16.64), 'text': ' Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria.'}, {'timestamp': (16.64, 13.3), 'text': ' Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra.'}, {'timestamp': (13.32, 30.28), 'text': ' Hvilken nasjonalitet vi er fra. hvilken nasjonalitet vi tilhører.'}, {'timestamp': (32.52, 39.16), 'text': ' Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres'}, {'timestamp': (39.16, 42.0), 'text': ' innenfor landegrenser.'}, {'timestamp': (42.0, 46.74), 'text': ' Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter,'}, {'timestamp': (46.74, 51.12), 'text': ' og jenter og gutter som er glad i hverandre.'}, {'timestamp': (51.16, 57.42), 'text': ' Nordmenn trommer på Gud, Allah, Altet og ingenting.'}, {'timestamp': (57.42, 64.3), 'text': ' Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes.'}, {'timestamp': (64.34, 71.24), 'text': ' Med andre ord, Norge er dere. Norge er oss.'}, {'timestamp': (71.24, 78.04), 'text': ' Mitt største håp for Norge er at vi skal klare å ta vare på hverandre,'}, {'timestamp': (78.12, 84.68), 'text': ' at vi skal bygge dette landet videre på tillit, fellesskap og raushet.'}]} } ``` Word Level Timestamps: ```json { {"text": "Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbilis og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.", "chunks": [ {"text": "Nordmenn", "timestamp": [0.72, 1.42]}, {"text": "er", "timestamp": [1.42, 1.74]}, // ... more chunks ... {"text": "raushet.", "timestamp": [83.1, 84.88]} ] } } ``` Nynorsk: ```json { {"text": "Nordmenn er nordlendingar, trøndarar, sørlendingar og folk frå alle andre regionar. Nordmenn er også innvandra frå Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikkje alltid så lett å seie kvar vi er frå, kva nasjonalitet vi tilhøyrer. Det vi kallar heim, er der hjartet vårt er, og det kan ikkje alltid plasserast innanfor landegrenser. Nordmenn er jenter som er glad i jenter, gutar som erade i gutar, og jenter og gutar som er glade i kvarandre. Nordmenn trommar på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes. Med andre ord, Noreg er dere! Noreg er oss. Mitt største håp for Noreg er at vi skal klare å ta vare på kvarandre, at vi skal byggje dette landet vidare på tillit, fellesskap og raushet."} } ``` English: ```json { {"text": "Norwegians are Norwegians, trønders, southerners and people from all other regions. 
Norwegians are also invaded from Afghanistan, Pakistan, Poland, Sweden, Somalia and Suria. It is not always so easy to say where we are from, what nationality we belong to. What we call home is where our heart is, and it cannot always be placed within national borders. Norwegians are girls who like girls, boys who like boys, and girls and boys who like each other. Norwegians thrump on God, Allah, Altet and nothing. Norwegians like Grieg, Kygo, Helbilis and Kari Bremnes. In other words, Norway is you. Norway is us. My biggest hope for Norway is that we should be able to take care of each other, that we should build this country on trust, community and generosity."} } ``` </details> ### Whisper CPP Whisper CPP is a C++ implementation of the Whisper model, offering the same functionalities with the added benefits of C++ efficiency and performance optimizations. This allows embedding any Whisper model into a binary file, facilitating the development of real applications. However, it requires some familiarity with compiling C++ programs. Their [homepage](https://github.com/ggerganov/whisper.cpp) provides examples of how to build applications, including real-time transcription. We have converted this model to the ggml-format model used by Whisper CPP binaries. The file can be downloaded [here](blob/main/ggml-model.bin), and a `q5_0` quantized version is also available [here](blob/main/ggml-model-q5_0.bin). ```bash # We can download and compile whisper.cpp $ git clone --depth 1 https://github.com/ggerganov/whisper.cpp --branch v1.5.1 $ cd whisper.cpp/ $ make # We also need to convert the audio to WAV as that is the only format supported by whisper.cpp $ wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3 $ ffmpeg -i king.mp3 -ar 16000 -ac 1 -c:a pcm_s16le king.wav # Lets download the two ggml-files from this site wget -N https://huggingface.co/NbAiLabBeta/nb-whisper-medium/resolve/main/ggml-model.bin -O models/nb-medium-ggml-model.bin wget -N https://huggingface.co/NbAiLabBeta/nb-whisper-medium/resolve/main/ggml-model-q5_0.bin -O models/nb-medium-ggml-model-q5_0.bin # And run it with the f16 default model $ ./main -l no -m models/nb-medium-ggml-model.bin king.wav # Or the quantized version $ ./main -l no -m models/nb-medium-ggml-model-q5_0.bin king.wav ``` ### WhisperX and Speaker Diarization Speaker diarization is a technique in natural language processing and automatic speech recognition that identifies and separates different speakers in an audio recording. It segments the audio into parts based on who is speaking, enhancing the quality of transcribing meetings or phone calls. We find that [WhisperX](https://github.com/m-bain/whisperX) is the easiest way to use our models for diarizing speech. In addition, WhisperX is using phoneme-based Wav2Vec-models for improving the alignment of the timestamps. As of December 2023 it also has native support for using the nb-wav2vec-models. It currently uses [PyAnnote-audio](https://github.com/pyannote/pyannote-audio) for doing the actual diarization. This package has a fairly strict licence where you have to agree to user terms. Follow the instructions below. ```bash # Follow the install instructions on https://github.com/m-bain/whisperX # Make sure you have a HuggingFace account and have agreed to the pyannote terms # Log in (or supply HF Token in command line) huggingface-cli login # Download a test file wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/knuthamsun.mp3 # Optional. 
If you get complaints about missing support for Norwegian, do: pip uninstall whisperx && pip install git+https://github.com/m-bain/whisperx.git@8540ff5985fceee764acbed94f656063d7f56540 # Transcribe the test file. All transcripts will end up in the directory of the mp3-file whisperx knuthamsun.mp3 --model NbAiLabBeta/nb-whisper-medium-semantic --language no --diarize ``` You can also run WhisperX from Python. Please take a look at the instructions on [WhisperX homepage](https://github.com/m-bain/whisperX). ### API Instructions for accessing the models via a simple API are included in the demos under Spaces. Note that these demos are temporary and will only be available for a few weeks. ## Training Data The training data originates from Språkbanken and the National Library of Norway's digital collection, including: - NST Norwegian ASR Database (16 kHz) and its corresponding dataset - Transcribed speeches from the Norwegian Parliament by Språkbanken - TV broadcast (NRK) subtitles (NLN digital collection) - Audiobooks (NLN digital collection) ## Downstream Use The models, especially the smaller ones, may exhibit occasional hallucinations and may drop parts of the transcript. They are designed to convert spoken language into grammatically correct written sentences, which might not always be word-for-word translations. We have made two extra model variants for users who want a different transcription style. We encourage users to try the models themselves to get a better understanding. ## Bias, Risks, and Limitations Using these models without adequate risk assessment and mitigation could be considered irresponsible. They may contain biases or other undesirable distortions. Users who deploy these models or integrate them into systems or services are responsible for mitigating risks and complying with applicable AI regulations. The National Library of Norway, as the model owner, disclaims liability for any outcomes resulting from third-party use of these models. ### Software The model was trained using Jax/Flax and converted to PyTorch, TensorFlow, whisper.cpp, and ONNX formats. These are available under `Files and versions`. We welcome requests for conversion to other formats. All training code and scripts are released under the Apache License 2.0 in the GitHub repository [nb-whisper](https://github.com/NbAiLab/nb-whisper/). ## Citation & Contributors The NB-Whisper Medium model is a product of the NoSTram project led by Per Egil Kummervold ([@pere](https://huggingface.co/pere)) at the National Library of Norway. Key contributors include Javier de la Rosa ([@versae](https://huggingface.co/versae)), Freddy Wetjen ([@freddyw](https://huggingface.co/freddyw)), and Rolv-Arild Braaten ([@Rolv-Arild](https://huggingface.co/Rolv-Arild)). NB AI-Lab, under the direction of Svein Arne Brygfjeld ([@Brygfjeld](https://huggingface.co/Brygfjeld)), supported the project's successful completion. A detailed paper on our process and findings is forthcoming. ## Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. 
When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (The National Library of Norway) be liable for any results arising from the use made by third parties of these models. ## Acknowledgements Our gratitude extends to [Google TPU Research Cloud](https://sites.research.google/trc/about/) for training resources, Google Cloud for translation credits, and HuggingFace's Sanchit Gandhi for technical support. A special thank you to Per Erik Solberg at Språkbanken for the collaboration on the Stortinget corpus. ## Contact For feedback, technical concerns, or collaboration inquiries, please contact <a rel="noopener nofollow" href="mailto:[email protected]">[email protected]</a>. If you plan to include this model in your research, contact us for the latest information on our upcoming paper for citation purposes.
TheBloke/Luban-13B-GGML
TheBloke
"2023-09-27T13:02:17Z"
5
1
transformers
[ "transformers", "llama", "text-generation", "en", "dataset:Open-Orca/OpenOrca", "license:llama2", "region:us" ]
text-generation
"2023-08-30T15:46:28Z"
--- language: - en license: llama2 datasets: - Open-Orca/OpenOrca model_name: Luban 13B inference: false model_creator: AIDC-ai-business model_link: https://huggingface.co/AIDC-ai-business/Luban-13B model_type: llama pipeline_tag: text-generation quantized_by: TheBloke base_model: AIDC-ai-business/Luban-13B --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Luban 13B - GGML - Model creator: [AIDC-ai-business](https://huggingface.co/AIDC-ai-business) - Original model: [Luban 13B](https://huggingface.co/AIDC-ai-business/Luban-13B) ## Description This repo contains GGML format model files for [AIDC-ai-business's Luban 13B](https://huggingface.co/AIDC-ai-business/Luban-13B). ### Important note regarding GGML files. The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third party clients and libraries are expected to still support it for a time, but many may also drop support. Please use the GGUF models instead. ### About GGML GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as: * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Supports NVidia CUDA GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with GPU acceleration on all platforms (CUDA and OpenCL). Especially good for story telling. * [LM Studio](https://lmstudio.ai/), a fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with CUDA GPU acceleration via the c_transformers backend. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. 
## Repositories available * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Luban-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Luban-13B-GGUF) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/Luban-13B-GGML) * [AIDC-ai-business's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/AIDC-ai-business/Luban-13B) ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- compatibility_ggml start --> ## Compatibility These quantised GGML files are compatible with llama.cpp between June 6th (commit `2d43387`) and August 21st 2023. For support with latest llama.cpp, please use GGUF files instead. The final llama.cpp commit with support for GGML was: [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) As of August 23rd 2023 they are still compatible with all UIs, libraries and utilities which use GGML. This may change in the future. ## Explanation of the new k-quant methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw * GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type. Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_ggml end --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [luban-13b.ggmlv3.Q2_K.bin](https://huggingface.co/TheBloke/Luban-13B-GGML/blob/main/luban-13b.ggmlv3.Q2_K.bin) | Q2_K | 2 | 5.74 GB| 8.24 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. | | [luban-13b.ggmlv3.Q3_K_S.bin](https://huggingface.co/TheBloke/Luban-13B-GGML/blob/main/luban-13b.ggmlv3.Q3_K_S.bin) | Q3_K_S | 3 | 5.87 GB| 8.37 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors | | [luban-13b.ggmlv3.Q3_K_M.bin](https://huggingface.co/TheBloke/Luban-13B-GGML/blob/main/luban-13b.ggmlv3.Q3_K_M.bin) | Q3_K_M | 3 | 6.53 GB| 9.03 GB | New k-quant method. 
Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | [luban-13b.ggmlv3.Q3_K_L.bin](https://huggingface.co/TheBloke/Luban-13B-GGML/blob/main/luban-13b.ggmlv3.Q3_K_L.bin) | Q3_K_L | 3 | 7.14 GB| 9.64 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | [luban-13b.ggmlv3.Q4_0.bin](https://huggingface.co/TheBloke/Luban-13B-GGML/blob/main/luban-13b.ggmlv3.Q4_0.bin) | Q4_0 | 4 | 7.32 GB| 9.82 GB | Original quant method, 4-bit. | | [luban-13b.ggmlv3.Q4_K_S.bin](https://huggingface.co/TheBloke/Luban-13B-GGML/blob/main/luban-13b.ggmlv3.Q4_K_S.bin) | Q4_K_S | 4 | 7.56 GB| 10.06 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors | | [luban-13b.ggmlv3.Q4_K_M.bin](https://huggingface.co/TheBloke/Luban-13B-GGML/blob/main/luban-13b.ggmlv3.Q4_K_M.bin) | Q4_K_M | 4 | 8.06 GB| 10.56 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K | | [luban-13b.ggmlv3.Q4_1.bin](https://huggingface.co/TheBloke/Luban-13B-GGML/blob/main/luban-13b.ggmlv3.Q4_1.bin) | Q4_1 | 4 | 8.14 GB| 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. | | [luban-13b.ggmlv3.Q5_0.bin](https://huggingface.co/TheBloke/Luban-13B-GGML/blob/main/luban-13b.ggmlv3.Q5_0.bin) | Q5_0 | 5 | 8.95 GB| 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. | | [luban-13b.ggmlv3.Q5_K_S.bin](https://huggingface.co/TheBloke/Luban-13B-GGML/blob/main/luban-13b.ggmlv3.Q5_K_S.bin) | Q5_K_S | 5 | 9.14 GB| 11.64 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors | | [luban-13b.ggmlv3.Q5_K_M.bin](https://huggingface.co/TheBloke/Luban-13B-GGML/blob/main/luban-13b.ggmlv3.Q5_K_M.bin) | Q5_K_M | 5 | 9.40 GB| 11.90 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K | | [luban-13b.ggmlv3.Q5_1.bin](https://huggingface.co/TheBloke/Luban-13B-GGML/blob/main/luban-13b.ggmlv3.Q5_1.bin) | Q5_1 | 5 | 9.76 GB| 12.26 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. | | [luban-13b.ggmlv3.Q6_K.bin](https://huggingface.co/TheBloke/Luban-13B-GGML/blob/main/luban-13b.ggmlv3.Q6_K.bin) | Q6_K | 6 | 10.83 GB| 13.33 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization | | [luban-13b.ggmlv3.Q8_0.bin](https://huggingface.co/TheBloke/Luban-13B-GGML/blob/main/luban-13b.ggmlv3.Q8_0.bin) | Q8_0 | 8 | 13.83 GB| 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ## How to run in `llama.cpp` Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier. For compatibility with latest llama.cpp, please use GGUF files instead. ``` ./main -t 10 -ngl 32 -m luban-13b.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. 
Write a response that appropriately completes the request.\n\n### Instruction:\nWrite a story about llamas\n\n### Response:" ``` Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`. Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length for this model. For example, `-c 4096` for a Llama 2 model. For models that use RoPE, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. 
Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: AIDC-ai-business's Luban 13B # Luban-13B Fine-tuned from Llama2-13B, we use Orca-style open source data as well as other open source data for fine-tuning. # Model Details * **Trained by**: AIDC AI-Business. * **Model type:** **Luban-13B** is an auto-regressive language model based on the Llama 2 transformer architecture. * **Language(s)**: English * **License for Luban-13B base weights**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/)) # Prompting ## Prompt Template for alpaca style ``` ### Instruction: <prompt> (without the <>) ### Response: ``` # Evaluation Results ([Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)) | Metric | Value | |-----------------------|-------| | Avg. | 65.03 | | ARC (25-shot) | 63.05 | | HellaSwag (10-shot) | 82.8 | | MMLU (5-shot) | 58.73 | | TruthfulQA (0-shot) | 55.53 |
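As an illustration not found on the original card, the alpaca template above can also be driven from Python through the `llama-cpp-python` bindings. Note this is a sketch under an assumption: GGML v3 files like these need an older release of the bindings (roughly pre-0.1.79), since later versions only read GGUF.

```python
# Sketch: querying the Q4_K_M GGML file with llama-cpp-python.
# Assumes an older llama-cpp-python release that still reads GGML v3;
# the model path and sampling settings are illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="luban-13b.ggmlv3.q4_K_M.bin",
    n_ctx=2048,        # matches the -c 2048 used above
    n_gpu_layers=32,   # matches -ngl 32; set to 0 for CPU-only
)

prompt = (
    "### Instruction:\n"
    "Write a story about llamas\n\n"
    "### Response:\n"
)

out = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(out["choices"][0]["text"])
```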
rayTest/Llama-3.2-1B-Instruct-alpaca-a00a622b-d2c0-41e5-afe5-616239bde59b
rayTest
"2025-03-05T17:02:25Z"
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Llama-3.2-1B-Instruct", "base_model:adapter:unsloth/Llama-3.2-1B-Instruct", "license:llama3.2", "region:us" ]
null
"2025-03-05T17:02:25Z"
--- library_name: peft license: llama3.2 base_model: unsloth/Llama-3.2-1B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 035b70fb-ba78-45d7-82f1-3f2c87ad8f52 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Llama-3.2-1B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 01ea61fd5779ea07_train_data.json ds_type: json format: custom path: /workspace/input_data/01ea61fd5779ea07_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: false group_by_length: false hub_model_id: samoline/035b70fb-ba78-45d7-82f1-3f2c87ad8f52 hub_repo: samoline hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 4 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 4 lora_target_linear: true lr_scheduler: cosine max_steps: 2 micro_batch_size: 1 mlflow_experiment_name: /tmp/01ea61fd5779ea07_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: samoline-nan wandb_mode: online wandb_name: a00a622b-d2c0-41e5-afe5-616239bde59b wandb_project: Gradients-On-Demand wandb_run: dev wandb_runid: a00a622b-d2c0-41e5-afe5-616239bde59b warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 035b70fb-ba78-45d7-82f1-3f2c87ad8f52 This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.0000 | 1 | nan | | 0.0 | 0.0000 | 2 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
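The card itself carries no usage snippet; here is a minimal sketch of loading a LoRA adapter of this kind with PEFT. The adapter repo id (taken from this row's modelId) and the generation settings are assumptions, not taken from the card.

```python
# Sketch: attaching the LoRA adapter to its base model with PEFT.
# Repo ids and generation settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Llama-3.2-1B-Instruct"
adapter_id = "rayTest/Llama-3.2-1B-Instruct-alpaca-a00a622b-d2c0-41e5-afe5-616239bde59b"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # loads adapter weights on top

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```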
REDSCARE/RS2281
REDSCARE
"2023-05-18T01:06:09Z"
0
0
adapter-transformers
[ "adapter-transformers", "chemistry", "en", "es", "dataset:togethercomputer/RedPajama-Data-1T", "license:other", "region:us" ]
null
"2023-05-18T01:04:34Z"
--- license: other datasets: - togethercomputer/RedPajama-Data-1T language: - en - es metrics: - accuracy library_name: adapter-transformers tags: - chemistry ---
chribeiro/reinforce-CartPole-v1
chribeiro
"2023-05-04T23:37:49Z"
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2023-05-04T22:46:23Z"
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
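The card links to the course rather than showing code; as a rough sketch of the Monte-Carlo policy-gradient update a Reinforce agent of this kind performs (assumed from the general method, not from this specific checkpoint):

```python
# Sketch: the core REINFORCE update on CartPole-v1.
# Network shape, learning rate, and gamma are illustrative assumptions.
import torch
import torch.nn as nn
import gymnasium as gym

env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 128), nn.ReLU(), nn.Linear(128, 2), nn.Softmax(dim=-1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
gamma = 0.99

for episode in range(1000):
    obs, _ = env.reset()
    log_probs, rewards = [], []
    done = False
    while not done:
        probs = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(probs)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(reward)
        done = terminated or truncated
    # Discounted returns, accumulated backwards from the end of the episode.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.as_tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # variance reduction
    loss = -(torch.stack(log_probs) * returns).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```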
flowerpot76/CartPole-v1
flowerpot76
"2023-04-29T09:37:04Z"
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2023-04-23T02:57:45Z"
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
enrybds/firstmodel
enrybds
"2022-12-02T08:54:59Z"
43
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2022-12-01T22:09:57Z"
Fine-tuned model, 01/12/2022 · Fine-tuned from jonatasgrosman/wav2vec2-large-xlsr-53-spanish · 150 audios (120 train, 30 test)
RobertoMCA97/xlm-roberta-base-finetuned-panx-de-fr
RobertoMCA97
"2022-03-16T12:24:41Z"
3
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-03-16T12:03:40Z"
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1667 - F1: 0.8582 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2885 | 1.0 | 715 | 0.1817 | 0.8287 | | 0.1497 | 2.0 | 1430 | 0.1618 | 0.8442 | | 0.0944 | 3.0 | 2145 | 0.1667 | 0.8582 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
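Since the card leaves usage unstated, a minimal inference sketch with the 🤗 pipeline API follows; the example sentence and aggregation setting are assumptions.

```python
# Sketch: running the fine-tuned PANX (NER) model through the pipeline API.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="RobertoMCA97/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Angela Merkel a visité Berlin et Paris."))
```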
guptasaurabh78/mistral_b_finance_sg5
guptasaurabh78
"2024-04-01T10:13:40Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-04-01T10:13:13Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
healthylegit/ricogoldgel
healthylegit
"2024-05-04T18:01:28Z"
0
0
null
[ "region:us" ]
null
"2024-01-10T18:46:36Z"
<h1>Rico Gold Gel - Price and Where to Buy</h1> Discover the power of Rico Gold Gel, your essential ally in the pursuit of improved sexual performance. This innovative gel, available for 49 USD at <b><a href="https://www.ricogoldgel.xyz">https://www.ricogoldgel.xyz</a></b>, is designed for men who want not only to improve their experience in intimacy, but also to bring greater satisfaction to their partner. <h3>Unique Features:</h3> Advanced Formula: Composed of natural ingredients selected for their effectiveness and safety, Rico Gold Gel is a first-rate product in the market of supplements for sexual performance. Simple, Discreet Use: Its gel presentation allows direct, comfortable application and quick, efficient absorption without leaving residue. Proven Benefits: From increased duration and firmness to greater sensitivity and pleasure, the effects of Rico Gold Gel are noticeable from the first applications. <h3>Why Choose Rico Gold Gel?</h3> Guaranteed Quality: Committed to your well-being and satisfaction, Rico Gold Gel meets the highest quality standards. Visible Results: Testimonials from our users vouch for the effectiveness of Rico Gold Gel in improving sexual performance. Easy, Secure Purchase: Get your Rico Gold Gel simply and with full confidence through our website https://www.ricogoldgel.xyz. Take your sexual experience to a new level with Rico Gold Gel! <a href="https://www.ricogoldgel.xyz">Visit our website</a> and discover how this product can transform your intimate life.
fatttty/classification_multi_label_des_crimes_30
fatttty
"2024-10-21T23:48:57Z"
106
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-10-21T23:48:31Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Elkelouizajo/bert_hans_mnli_xlarge
Elkelouizajo
"2024-03-21T16:07:13Z"
6
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-large-cased", "base_model:finetune:google-bert/bert-large-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-03-21T09:09:42Z"
--- license: apache-2.0 base_model: google-bert/bert-large-cased tags: - generated_from_trainer model-index: - name: results_bert_augmented_xlarge results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results_bert_augmented_xlarge This model is a fine-tuned version of [google-bert/bert-large-cased](https://huggingface.co/google-bert/bert-large-cased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 8446 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 11.0 ### Training results ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
alesthehuman/Reinforce-CartPole-v1
alesthehuman
"2023-07-31T14:14:46Z"
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2023-06-14T12:55:33Z"
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
mshojaei77/gemma-3-4b-persian-v0-merged-Q4_K_M-GGUF
mshojaei77
"2025-03-15T00:29:05Z"
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:mshojaei77/gemma-3-4b-persian-v0-merged", "base_model:quantized:mshojaei77/gemma-3-4b-persian-v0-merged", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-03-15T00:28:50Z"
--- base_model: mshojaei77/gemma-3-4b-persian-v0-merged library_name: transformers tags: - llama-cpp - gguf-my-repo --- # mshojaei77/gemma-3-4b-persian-v0-merged-Q4_K_M-GGUF This model was converted to GGUF format from [`mshojaei77/gemma-3-4b-persian-v0-merged`](https://huggingface.co/mshojaei77/gemma-3-4b-persian-v0-merged) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/mshojaei77/gemma-3-4b-persian-v0-merged) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo mshojaei77/gemma-3-4b-persian-v0-merged-Q4_K_M-GGUF --hf-file gemma-3-4b-persian-v0-merged-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo mshojaei77/gemma-3-4b-persian-v0-merged-Q4_K_M-GGUF --hf-file gemma-3-4b-persian-v0-merged-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo mshojaei77/gemma-3-4b-persian-v0-merged-Q4_K_M-GGUF --hf-file gemma-3-4b-persian-v0-merged-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo mshojaei77/gemma-3-4b-persian-v0-merged-Q4_K_M-GGUF --hf-file gemma-3-4b-persian-v0-merged-q4_k_m.gguf -c 2048 ```
mariashalu/flight1
mariashalu
"2025-03-21T08:09:20Z"
0
0
null
[ "region:us" ]
null
"2025-03-21T08:05:07Z"
# Flight Fare Prediction - a Classic Time Series Project * Flight fare prediction is a classical problem of time series forecasting that finds trends in past observations to outline the future * Many popular flight booking websites today, including Google Flights, showcase important insights on: * Current fare status: high, low or fair * Past fare trends and upcoming future trends * The right time to book a flight ticket * In this project, we are going to build a Python Flight Fare Prediction App that returns the fare prediction for a given set of travel details, like: departure date, arrival date, departure city, arrival city, stoppages, and the airline carrier. ## App Functionality ![](templates/App_Functionality.gif) ## Steps to run Flight Fare App - on Windows * Prerequisites: [Python 3.9](https://www.python.org/downloads/) (ensure Python is added to [PATH](https://medium.com/co-learning-lounge/how-to-download-install-python-on-windows-2021-44a707994013)) + [Git](https://git-scm.com/downloads) Client * Open GIT CMD >> navigate to working directory >> Clone this GitHub repo (or download project files from GitHub directly) git clone https://github.com/skillcate/flight-price-prediction.git * Open Windows Powershell >> navigate to new working directory (cloned repo folder) * Run Project in Flask (Using PIP + Virtualenv) pip install virtualenv # install virtual environment virtualenv ENV # create virtual environment by the name ENV .\ENV\Scripts\activate # activate ENV pip install -r .\requirements.txt # install project dependencies python app.py # run the project deactivate # close virtual environment once done ## Steps to run Flight Fare App - on Mac * Prerequisites: [Python 3.9](https://www.python.org/downloads/) * Open Terminal >> navigate to working directory >> Clone this GitHub repo (or download project files from GitHub directly) git clone https://github.com/skillcate/flight-price-prediction.git * Navigate to project working directory (cloned repo folder) * Run Project in Flask (Using PIP + Virtualenv) pip install virtualenv # install virtual environment virtualenv ENV # create virtual environment by the name ENV source ENV/bin/activate # activate ENV pip install -r requirements.txt # install project dependencies python app.py # run the project deactivate # close virtual environment once done ### Bug / Feature Request If you find a bug (the website couldn't handle the query and / or gave undesired results), kindly open an issue [here](https://github.com/skillcate/flight-price-prediction/issues) by including your search query and the expected result. ### Important links: * [Dataset](https://www.kaggle.com/datasets/nikhilmittal/flight-fare-prediction-mh) * [Project Folder on Drive](https://drive.google.com/drive/folders/1vnapi048bbmoXyoxOLLX6W_tA6a1uQ9w?usp=sharing)
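For readers who want the modelling idea without running the app, here is a minimal sketch of the fare-regression step. The column names follow the linked Kaggle dataset, but the feature engineering and the RandomForest choice are assumptions, not necessarily what the project ships.

```python
# Sketch: regressing fares on engineered date/route features.
# File and column names follow the Kaggle flight-fare dataset linked above;
# treat the exact preprocessing as an assumption.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

df = pd.read_excel("Data_Train.xlsx")  # training file from the Kaggle dataset
journey = pd.to_datetime(df["Date_of_Journey"], format="%d/%m/%Y")
df["Journey_day"] = journey.dt.day
df["Journey_month"] = journey.dt.month

# One-hot encode the categorical route features and join the date features.
X = pd.get_dummies(df[["Airline", "Source", "Destination"]]).join(
    df[["Journey_day", "Journey_month"]]
)
y = df["Price"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = RandomForestRegressor(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("R^2 on held-out fares:", model.score(X_test, y_test))
```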
Liphos/dqn-SpaceInvaders
Liphos
"2022-12-31T20:53:51Z"
7
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2022-12-31T20:53:08Z"
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 597.00 +/- 112.19 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Liphos -f logs/ python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Liphos -f logs/ rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Liphos ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
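Outside the RL Zoo scripts, the checkpoint can also be pulled directly in Python; a sketch with `huggingface_sb3` follows, where the filename inside the repo is an assumption based on the zoo's usual naming.

```python
# Sketch: loading the checkpoint without the RL Zoo, via huggingface_sb3.
# The filename assumes the zoo's "<algo>-<env>.zip" convention; running the
# agent would additionally require the AtariWrapper + frame-stack setup above.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

path = load_from_hub(
    repo_id="Liphos/dqn-SpaceInvaders",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(path)
print(model.policy)
```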
m-aliabbas/wav2vec2-base-timit-demo-idrak-paperspace1
m-aliabbas
"2022-11-22T09:36:03Z"
109
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2022-11-22T09:17:56Z"
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-idrak-paperspace1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-idrak-paperspace1 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3623 - Wer: 0.3471 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.1034 | 0.87 | 500 | 0.3623 | 0.3471 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.12.0+cu116 - Datasets 1.18.3 - Tokenizers 0.12.1
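Usage is not shown on the card; a minimal transcription sketch with the ASR pipeline follows, where the audio file is a placeholder.

```python
# Sketch: transcribing a 16 kHz mono WAV file with the fine-tuned wav2vec2 model.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="m-aliabbas/wav2vec2-base-timit-demo-idrak-paperspace1",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```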
RichardErkhov/gotoplanb_-_SmolLM2-FT-MyDataset-gguf
RichardErkhov
"2025-03-02T05:14:17Z"
0
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-03-02T05:11:03Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) SmolLM2-FT-MyDataset - GGUF - Model creator: https://huggingface.co/gotoplanb/ - Original model: https://huggingface.co/gotoplanb/SmolLM2-FT-MyDataset/ | Name | Quant method | Size | | ---- | ---- | ---- | | [SmolLM2-FT-MyDataset.Q2_K.gguf](https://huggingface.co/RichardErkhov/gotoplanb_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q2_K.gguf) | Q2_K | 0.08GB | | [SmolLM2-FT-MyDataset.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/gotoplanb_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.IQ3_XS.gguf) | IQ3_XS | 0.08GB | | [SmolLM2-FT-MyDataset.IQ3_S.gguf](https://huggingface.co/RichardErkhov/gotoplanb_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.IQ3_S.gguf) | IQ3_S | 0.08GB | | [SmolLM2-FT-MyDataset.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/gotoplanb_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q3_K_S.gguf) | Q3_K_S | 0.08GB | | [SmolLM2-FT-MyDataset.IQ3_M.gguf](https://huggingface.co/RichardErkhov/gotoplanb_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.IQ3_M.gguf) | IQ3_M | 0.08GB | | [SmolLM2-FT-MyDataset.Q3_K.gguf](https://huggingface.co/RichardErkhov/gotoplanb_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q3_K.gguf) | Q3_K | 0.09GB | | [SmolLM2-FT-MyDataset.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/gotoplanb_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q3_K_M.gguf) | Q3_K_M | 0.09GB | | [SmolLM2-FT-MyDataset.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/gotoplanb_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q3_K_L.gguf) | Q3_K_L | 0.09GB | | [SmolLM2-FT-MyDataset.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/gotoplanb_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.IQ4_XS.gguf) | IQ4_XS | 0.09GB | | [SmolLM2-FT-MyDataset.Q4_0.gguf](https://huggingface.co/RichardErkhov/gotoplanb_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q4_0.gguf) | Q4_0 | 0.09GB | | [SmolLM2-FT-MyDataset.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/gotoplanb_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.IQ4_NL.gguf) | IQ4_NL | 0.09GB | | [SmolLM2-FT-MyDataset.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/gotoplanb_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q4_K_S.gguf) | Q4_K_S | 0.1GB | | [SmolLM2-FT-MyDataset.Q4_K.gguf](https://huggingface.co/RichardErkhov/gotoplanb_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q4_K.gguf) | Q4_K | 0.1GB | | [SmolLM2-FT-MyDataset.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/gotoplanb_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q4_K_M.gguf) | Q4_K_M | 0.1GB | | [SmolLM2-FT-MyDataset.Q4_1.gguf](https://huggingface.co/RichardErkhov/gotoplanb_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q4_1.gguf) | Q4_1 | 0.09GB | | [SmolLM2-FT-MyDataset.Q5_0.gguf](https://huggingface.co/RichardErkhov/gotoplanb_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q5_0.gguf) | Q5_0 | 0.1GB | | [SmolLM2-FT-MyDataset.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/gotoplanb_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q5_K_S.gguf) | Q5_K_S | 0.1GB | | [SmolLM2-FT-MyDataset.Q5_K.gguf](https://huggingface.co/RichardErkhov/gotoplanb_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q5_K.gguf) | Q5_K | 0.1GB | | 
[SmolLM2-FT-MyDataset.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/gotoplanb_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q5_K_M.gguf) | Q5_K_M | 0.1GB | | [SmolLM2-FT-MyDataset.Q5_1.gguf](https://huggingface.co/RichardErkhov/gotoplanb_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q5_1.gguf) | Q5_1 | 0.1GB | | [SmolLM2-FT-MyDataset.Q6_K.gguf](https://huggingface.co/RichardErkhov/gotoplanb_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q6_K.gguf) | Q6_K | 0.13GB | | [SmolLM2-FT-MyDataset.Q8_0.gguf](https://huggingface.co/RichardErkhov/gotoplanb_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q8_0.gguf) | Q8_0 | 0.13GB | Original model description: --- base_model: HuggingFaceTB/SmolLM2-135M library_name: transformers model_name: SmolLM2-FT-MyDataset tags: - generated_from_trainer - smol-course - module_1 - trl - sft licence: license --- # Model Card for SmolLM2-FT-MyDataset This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="gotoplanb/SmolLM2-FT-MyDataset", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.12.1 - Transformers: 4.46.3 - Pytorch: 2.5.1 - Datasets: 3.1.0 - Tokenizers: 0.20.3 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
MayBashendy/ArabicNewSplits4_WithDuplicationsForScore5_FineTuningAraBERT_run1_AugV5_k10_task3_organization
MayBashendy
"2024-12-12T00:36:43Z"
162
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:aubmindlab/bert-base-arabertv02", "base_model:finetune:aubmindlab/bert-base-arabertv02", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-12-12T00:30:31Z"
--- library_name: transformers base_model: aubmindlab/bert-base-arabertv02 tags: - generated_from_trainer model-index: - name: ArabicNewSplits4_WithDuplicationsForScore5_FineTuningAraBERT_run1_AugV5_k10_task3_organization results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ArabicNewSplits4_WithDuplicationsForScore5_FineTuningAraBERT_run1_AugV5_k10_task3_organization This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7247 - Qwk: 0.2464 - Mse: 0.7247 - Rmse: 0.8513 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:| | No log | 0.0444 | 2 | 3.2300 | -0.0149 | 3.2300 | 1.7972 | | No log | 0.0889 | 4 | 1.6866 | -0.0070 | 1.6866 | 1.2987 | | No log | 0.1333 | 6 | 1.4718 | 0.0255 | 1.4718 | 1.2132 | | No log | 0.1778 | 8 | 0.8949 | 0.1673 | 0.8949 | 0.9460 | | No log | 0.2222 | 10 | 0.5655 | 0.0222 | 0.5655 | 0.7520 | | No log | 0.2667 | 12 | 0.5613 | 0.0569 | 0.5613 | 0.7492 | | No log | 0.3111 | 14 | 0.5551 | 0.0303 | 0.5551 | 0.7451 | | No log | 0.3556 | 16 | 0.5454 | 0.0569 | 0.5454 | 0.7385 | | No log | 0.4 | 18 | 0.5520 | -0.0159 | 0.5520 | 0.7430 | | No log | 0.4444 | 20 | 0.8311 | 0.2000 | 0.8311 | 0.9116 | | No log | 0.4889 | 22 | 0.7803 | 0.2300 | 0.7803 | 0.8834 | | No log | 0.5333 | 24 | 0.6153 | 0.0720 | 0.6153 | 0.7844 | | No log | 0.5778 | 26 | 0.7098 | 0.2000 | 0.7098 | 0.8425 | | No log | 0.6222 | 28 | 0.6655 | 0.1888 | 0.6655 | 0.8158 | | No log | 0.6667 | 30 | 0.6458 | -0.1007 | 0.6458 | 0.8036 | | No log | 0.7111 | 32 | 0.7409 | 0.0345 | 0.7409 | 0.8607 | | No log | 0.7556 | 34 | 0.6909 | -0.0390 | 0.6909 | 0.8312 | | No log | 0.8 | 36 | 0.6797 | -0.0072 | 0.6797 | 0.8244 | | No log | 0.8444 | 38 | 0.6871 | 0.0199 | 0.6871 | 0.8289 | | No log | 0.8889 | 40 | 0.7641 | 0.0311 | 0.7641 | 0.8741 | | No log | 0.9333 | 42 | 0.9004 | -0.0275 | 0.9004 | 0.9489 | | No log | 0.9778 | 44 | 0.7186 | 0.1628 | 0.7186 | 0.8477 | | No log | 1.0222 | 46 | 0.7551 | 0.0737 | 0.7551 | 0.8690 | | No log | 1.0667 | 48 | 0.6982 | 0.1373 | 0.6982 | 0.8356 | | No log | 1.1111 | 50 | 0.7325 | 0.0728 | 0.7325 | 0.8559 | | No log | 1.1556 | 52 | 1.0763 | -0.0154 | 1.0763 | 1.0374 | | No log | 1.2 | 54 | 0.7183 | 0.0769 | 0.7183 | 0.8476 | | No log | 1.2444 | 56 | 0.6764 | 0.1345 | 0.6764 | 0.8224 | | No log | 1.2889 | 58 | 0.7061 | 0.0617 | 0.7061 | 0.8403 | | No log | 1.3333 | 60 | 1.0978 | -0.0277 | 1.0978 | 1.0478 | | No log | 1.3778 | 62 | 1.0735 | 0.0252 | 1.0735 | 1.0361 | | No log | 1.4222 | 64 | 0.8700 | 0.0891 | 0.8700 | 0.9328 | | No log | 1.4667 | 66 | 0.8345 | 0.0891 | 0.8345 | 0.9135 | | No log | 1.5111 | 68 | 0.7870 | 0.0802 | 0.7870 | 0.8871 | | No log | 1.5556 | 70 | 1.0944 | 0.0 | 1.0944 | 1.0461 | | No log | 1.6 | 72 | 
1.0873 | 0.0038 | 1.0873 | 1.0427 | | No log | 1.6444 | 74 | 1.0292 | 0.0169 | 1.0292 | 1.0145 | | No log | 1.6889 | 76 | 1.2860 | 0.0270 | 1.2860 | 1.1340 | | No log | 1.7333 | 78 | 0.9786 | 0.0717 | 0.9786 | 0.9892 | | No log | 1.7778 | 80 | 0.7560 | 0.2577 | 0.7560 | 0.8695 | | No log | 1.8222 | 82 | 0.8102 | 0.2153 | 0.8102 | 0.9001 | | No log | 1.8667 | 84 | 1.4464 | 0.0464 | 1.4464 | 1.2027 | | No log | 1.9111 | 86 | 1.4895 | 0.0502 | 1.4895 | 1.2205 | | No log | 1.9556 | 88 | 1.2051 | 0.0262 | 1.2051 | 1.0977 | | No log | 2.0 | 90 | 0.9645 | 0.0308 | 0.9645 | 0.9821 | | No log | 2.0444 | 92 | 0.7751 | 0.2727 | 0.7751 | 0.8804 | | No log | 2.0889 | 94 | 0.8084 | 0.2444 | 0.8084 | 0.8991 | | No log | 2.1333 | 96 | 0.9621 | 0.0539 | 0.9621 | 0.9809 | | No log | 2.1778 | 98 | 1.6207 | 0.1111 | 1.6207 | 1.2731 | | No log | 2.2222 | 100 | 1.3510 | 0.1169 | 1.3510 | 1.1623 | | No log | 2.2667 | 102 | 0.8183 | 0.1538 | 0.8183 | 0.9046 | | No log | 2.3111 | 104 | 0.8754 | 0.2146 | 0.8754 | 0.9356 | | No log | 2.3556 | 106 | 0.9240 | 0.2489 | 0.9240 | 0.9613 | | No log | 2.4 | 108 | 0.9545 | 0.2208 | 0.9545 | 0.9770 | | No log | 2.4444 | 110 | 1.3324 | 0.0789 | 1.3324 | 1.1543 | | No log | 2.4889 | 112 | 0.8852 | 0.1269 | 0.8852 | 0.9408 | | No log | 2.5333 | 114 | 0.7701 | 0.1919 | 0.7701 | 0.8775 | | No log | 2.5778 | 116 | 0.8001 | 0.2315 | 0.8001 | 0.8945 | | No log | 2.6222 | 118 | 0.7511 | 0.2350 | 0.7511 | 0.8666 | | No log | 2.6667 | 120 | 0.8243 | 0.1600 | 0.8243 | 0.9079 | | No log | 2.7111 | 122 | 1.1652 | 0.0312 | 1.1652 | 1.0795 | | No log | 2.7556 | 124 | 1.1735 | 0.0111 | 1.1735 | 1.0833 | | No log | 2.8 | 126 | 0.7277 | 0.2513 | 0.7277 | 0.8530 | | No log | 2.8444 | 128 | 0.7175 | 0.2626 | 0.7175 | 0.8470 | | No log | 2.8889 | 130 | 0.7618 | 0.2000 | 0.7618 | 0.8728 | | No log | 2.9333 | 132 | 0.8249 | 0.1269 | 0.8249 | 0.9082 | | No log | 2.9778 | 134 | 1.0374 | -0.0164 | 1.0374 | 1.0186 | | No log | 3.0222 | 136 | 1.0806 | -0.0164 | 1.0806 | 1.0395 | | No log | 3.0667 | 138 | 0.7984 | 0.1086 | 0.7984 | 0.8935 | | No log | 3.1111 | 140 | 0.8269 | 0.0512 | 0.8269 | 0.9094 | | No log | 3.1556 | 142 | 0.7617 | 0.2889 | 0.7617 | 0.8728 | | No log | 3.2 | 144 | 1.1631 | 0.0840 | 1.1631 | 1.0785 | | No log | 3.2444 | 146 | 1.8137 | 0.0787 | 1.8137 | 1.3468 | | No log | 3.2889 | 148 | 1.3948 | 0.1325 | 1.3948 | 1.1810 | | No log | 3.3333 | 150 | 0.6668 | 0.1910 | 0.6668 | 0.8166 | | No log | 3.3778 | 152 | 0.6601 | 0.2393 | 0.6601 | 0.8125 | | No log | 3.4222 | 154 | 0.6507 | 0.2749 | 0.6507 | 0.8067 | | No log | 3.4667 | 156 | 0.6683 | 0.2258 | 0.6683 | 0.8175 | | No log | 3.5111 | 158 | 0.7365 | 0.1568 | 0.7365 | 0.8582 | | No log | 3.5556 | 160 | 0.7321 | 0.2653 | 0.7321 | 0.8557 | | No log | 3.6 | 162 | 0.7172 | 0.2990 | 0.7172 | 0.8469 | | No log | 3.6444 | 164 | 0.7421 | 0.1628 | 0.7421 | 0.8614 | | No log | 3.6889 | 166 | 0.7474 | 0.2965 | 0.7474 | 0.8645 | | No log | 3.7333 | 168 | 0.9134 | 0.1781 | 0.9134 | 0.9557 | | No log | 3.7778 | 170 | 0.9228 | 0.1786 | 0.9228 | 0.9606 | | No log | 3.8222 | 172 | 0.7121 | 0.2245 | 0.7121 | 0.8439 | | No log | 3.8667 | 174 | 0.8061 | 0.2676 | 0.8061 | 0.8978 | | No log | 3.9111 | 176 | 0.8893 | 0.1336 | 0.8893 | 0.9430 | | No log | 3.9556 | 178 | 0.6875 | 0.2787 | 0.6875 | 0.8291 | | No log | 4.0 | 180 | 0.8988 | 0.1453 | 0.8988 | 0.9481 | | No log | 4.0444 | 182 | 1.2635 | 0.1409 | 1.2635 | 1.1241 | | No log | 4.0889 | 184 | 1.0170 | 0.1008 | 1.0170 | 1.0085 | | No log | 4.1333 | 186 | 0.6653 | 0.2298 | 0.6653 | 0.8157 | | No log | 
4.1778 | 188 | 0.7602 | 0.1357 | 0.7602 | 0.8719 | | No log | 4.2222 | 190 | 0.7576 | 0.1841 | 0.7576 | 0.8704 | | No log | 4.2667 | 192 | 0.6715 | 0.3136 | 0.6715 | 0.8194 | | No log | 4.3111 | 194 | 0.8296 | 0.0714 | 0.8296 | 0.9108 | | No log | 4.3556 | 196 | 1.0359 | 0.0949 | 1.0359 | 1.0178 | | No log | 4.4 | 198 | 0.8830 | 0.1392 | 0.8830 | 0.9397 | | No log | 4.4444 | 200 | 0.8187 | 0.1273 | 0.8187 | 0.9048 | | No log | 4.4889 | 202 | 0.7186 | 0.2653 | 0.7186 | 0.8477 | | No log | 4.5333 | 204 | 0.7372 | 0.2323 | 0.7372 | 0.8586 | | No log | 4.5778 | 206 | 0.8521 | 0.1660 | 0.8521 | 0.9231 | | No log | 4.6222 | 208 | 0.8257 | 0.1628 | 0.8257 | 0.9087 | | No log | 4.6667 | 210 | 0.7021 | 0.2340 | 0.7021 | 0.8379 | | No log | 4.7111 | 212 | 0.6850 | 0.3061 | 0.6850 | 0.8276 | | No log | 4.7556 | 214 | 0.6150 | 0.3520 | 0.6150 | 0.7842 | | No log | 4.8 | 216 | 0.6979 | 0.2165 | 0.6979 | 0.8354 | | No log | 4.8444 | 218 | 0.6583 | 0.2487 | 0.6583 | 0.8114 | | No log | 4.8889 | 220 | 0.6039 | 0.2865 | 0.6039 | 0.7771 | | No log | 4.9333 | 222 | 0.5460 | 0.3797 | 0.5460 | 0.7389 | | No log | 4.9778 | 224 | 0.5540 | 0.3913 | 0.5540 | 0.7443 | | No log | 5.0222 | 226 | 0.5711 | 0.4105 | 0.5711 | 0.7557 | | No log | 5.0667 | 228 | 0.6204 | 0.3641 | 0.6204 | 0.7876 | | No log | 5.1111 | 230 | 0.6388 | 0.3641 | 0.6388 | 0.7993 | | No log | 5.1556 | 232 | 0.6900 | 0.2963 | 0.6900 | 0.8306 | | No log | 5.2 | 234 | 0.9107 | 0.2000 | 0.9107 | 0.9543 | | No log | 5.2444 | 236 | 1.0195 | 0.1506 | 1.0195 | 1.0097 | | No log | 5.2889 | 238 | 0.8362 | 0.2348 | 0.8362 | 0.9144 | | No log | 5.3333 | 240 | 0.6980 | 0.3548 | 0.6980 | 0.8355 | | No log | 5.3778 | 242 | 0.7505 | 0.2287 | 0.7505 | 0.8663 | | No log | 5.4222 | 244 | 0.6969 | 0.3469 | 0.6969 | 0.8348 | | No log | 5.4667 | 246 | 0.7223 | 0.1919 | 0.7223 | 0.8499 | | No log | 5.5111 | 248 | 0.6953 | 0.2239 | 0.6953 | 0.8338 | | No log | 5.5556 | 250 | 0.6827 | 0.3927 | 0.6827 | 0.8263 | | No log | 5.6 | 252 | 0.8056 | 0.2356 | 0.8056 | 0.8975 | | No log | 5.6444 | 254 | 0.8449 | 0.2208 | 0.8449 | 0.9192 | | No log | 5.6889 | 256 | 0.7297 | 0.2963 | 0.7297 | 0.8543 | | No log | 5.7333 | 258 | 0.8200 | 0.1628 | 0.8200 | 0.9055 | | No log | 5.7778 | 260 | 0.9761 | 0.1169 | 0.9761 | 0.9880 | | No log | 5.8222 | 262 | 0.8589 | 0.1776 | 0.8589 | 0.9268 | | No log | 5.8667 | 264 | 0.7384 | 0.2233 | 0.7384 | 0.8593 | | No log | 5.9111 | 266 | 0.6952 | 0.2842 | 0.6952 | 0.8338 | | No log | 5.9556 | 268 | 0.6817 | 0.2917 | 0.6817 | 0.8257 | | No log | 6.0 | 270 | 0.6975 | 0.1917 | 0.6975 | 0.8351 | | No log | 6.0444 | 272 | 0.7122 | 0.1753 | 0.7122 | 0.8439 | | No log | 6.0889 | 274 | 0.6602 | 0.2432 | 0.6602 | 0.8125 | | No log | 6.1333 | 276 | 0.6205 | 0.2174 | 0.6205 | 0.7877 | | No log | 6.1778 | 278 | 0.6443 | 0.2340 | 0.6443 | 0.8027 | | No log | 6.2222 | 280 | 0.7023 | 0.1667 | 0.7023 | 0.8380 | | No log | 6.2667 | 282 | 0.6546 | 0.2340 | 0.6546 | 0.8091 | | No log | 6.3111 | 284 | 0.6298 | 0.3407 | 0.6298 | 0.7936 | | No log | 6.3556 | 286 | 0.6547 | 0.2941 | 0.6547 | 0.8091 | | No log | 6.4 | 288 | 0.7235 | 0.2157 | 0.7235 | 0.8506 | | No log | 6.4444 | 290 | 0.7705 | 0.1781 | 0.7705 | 0.8778 | | No log | 6.4889 | 292 | 0.8232 | 0.1549 | 0.8232 | 0.9073 | | No log | 6.5333 | 294 | 0.7880 | 0.2432 | 0.7880 | 0.8877 | | No log | 6.5778 | 296 | 0.7517 | 0.2475 | 0.7517 | 0.8670 | | No log | 6.6222 | 298 | 0.7688 | 0.1481 | 0.7688 | 0.8768 | | No log | 6.6667 | 300 | 0.8379 | 0.1238 | 0.8379 | 0.9154 | | No log | 6.7111 | 302 | 0.7658 | 0.1238 | 
0.7658 | 0.8751 | | No log | 6.7556 | 304 | 0.6817 | 0.2258 | 0.6817 | 0.8257 | | No log | 6.8 | 306 | 0.6663 | 0.3191 | 0.6663 | 0.8163 | | No log | 6.8444 | 308 | 0.6710 | 0.2746 | 0.6710 | 0.8191 | | No log | 6.8889 | 310 | 0.6911 | 0.2917 | 0.6911 | 0.8313 | | No log | 6.9333 | 312 | 0.7084 | 0.2917 | 0.7084 | 0.8417 | | No log | 6.9778 | 314 | 0.7125 | 0.2536 | 0.7125 | 0.8441 | | No log | 7.0222 | 316 | 0.7268 | 0.2536 | 0.7268 | 0.8525 | | No log | 7.0667 | 318 | 0.7242 | 0.2917 | 0.7242 | 0.8510 | | No log | 7.1111 | 320 | 0.7237 | 0.2917 | 0.7237 | 0.8507 | | No log | 7.1556 | 322 | 0.7062 | 0.2917 | 0.7062 | 0.8404 | | No log | 7.2 | 324 | 0.6829 | 0.2464 | 0.6829 | 0.8264 | | No log | 7.2444 | 326 | 0.6701 | 0.3299 | 0.6701 | 0.8186 | | No log | 7.2889 | 328 | 0.6596 | 0.3043 | 0.6596 | 0.8121 | | No log | 7.3333 | 330 | 0.6604 | 0.2265 | 0.6604 | 0.8127 | | No log | 7.3778 | 332 | 0.7037 | 0.1556 | 0.7037 | 0.8389 | | No log | 7.4222 | 334 | 0.7394 | 0.2000 | 0.7394 | 0.8599 | | No log | 7.4667 | 336 | 0.7042 | 0.1913 | 0.7042 | 0.8392 | | No log | 7.5111 | 338 | 0.6726 | 0.2727 | 0.6726 | 0.8201 | | No log | 7.5556 | 340 | 0.7022 | 0.2549 | 0.7022 | 0.8380 | | No log | 7.6 | 342 | 0.7115 | 0.2549 | 0.7115 | 0.8435 | | No log | 7.6444 | 344 | 0.6870 | 0.3077 | 0.6870 | 0.8289 | | No log | 7.6889 | 346 | 0.6967 | 0.28 | 0.6967 | 0.8347 | | No log | 7.7333 | 348 | 0.7742 | 0.2300 | 0.7742 | 0.8799 | | No log | 7.7778 | 350 | 0.7984 | 0.1628 | 0.7984 | 0.8935 | | No log | 7.8222 | 352 | 0.7559 | 0.2593 | 0.7559 | 0.8694 | | No log | 7.8667 | 354 | 0.7168 | 0.2563 | 0.7168 | 0.8466 | | No log | 7.9111 | 356 | 0.7116 | 0.2563 | 0.7116 | 0.8436 | | No log | 7.9556 | 358 | 0.7399 | 0.2986 | 0.7399 | 0.8602 | | No log | 8.0 | 360 | 0.7685 | 0.1928 | 0.7685 | 0.8767 | | No log | 8.0444 | 362 | 0.7568 | 0.2676 | 0.7568 | 0.8699 | | No log | 8.0889 | 364 | 0.7390 | 0.2692 | 0.7390 | 0.8597 | | No log | 8.1333 | 366 | 0.6942 | 0.2653 | 0.6942 | 0.8332 | | No log | 8.1778 | 368 | 0.6747 | 0.2549 | 0.6747 | 0.8214 | | No log | 8.2222 | 370 | 0.6960 | 0.3171 | 0.6960 | 0.8343 | | No log | 8.2667 | 372 | 0.7018 | 0.2762 | 0.7018 | 0.8377 | | No log | 8.3111 | 374 | 0.7083 | 0.2390 | 0.7083 | 0.8416 | | No log | 8.3556 | 376 | 0.7517 | 0.2621 | 0.7517 | 0.8670 | | No log | 8.4 | 378 | 0.7746 | 0.2212 | 0.7746 | 0.8801 | | No log | 8.4444 | 380 | 0.7500 | 0.2621 | 0.7500 | 0.8660 | | No log | 8.4889 | 382 | 0.7369 | 0.2637 | 0.7369 | 0.8584 | | No log | 8.5333 | 384 | 0.7130 | 0.2464 | 0.7130 | 0.8444 | | No log | 8.5778 | 386 | 0.7015 | 0.24 | 0.7015 | 0.8376 | | No log | 8.6222 | 388 | 0.6968 | 0.2549 | 0.6968 | 0.8348 | | No log | 8.6667 | 390 | 0.6985 | 0.2487 | 0.6985 | 0.8358 | | No log | 8.7111 | 392 | 0.7214 | 0.2637 | 0.7214 | 0.8493 | | No log | 8.7556 | 394 | 0.7458 | 0.2637 | 0.7458 | 0.8636 | | No log | 8.8 | 396 | 0.7353 | 0.2637 | 0.7353 | 0.8575 | | No log | 8.8444 | 398 | 0.7288 | 0.2323 | 0.7288 | 0.8537 | | No log | 8.8889 | 400 | 0.7059 | 0.2653 | 0.7059 | 0.8402 | | No log | 8.9333 | 402 | 0.6947 | 0.2475 | 0.6947 | 0.8335 | | No log | 8.9778 | 404 | 0.6944 | 0.2475 | 0.6944 | 0.8333 | | No log | 9.0222 | 406 | 0.6909 | 0.2821 | 0.6909 | 0.8312 | | No log | 9.0667 | 408 | 0.6961 | 0.2727 | 0.6961 | 0.8344 | | No log | 9.1111 | 410 | 0.7024 | 0.28 | 0.7024 | 0.8381 | | No log | 9.1556 | 412 | 0.7146 | 0.2475 | 0.7146 | 0.8453 | | No log | 9.2 | 414 | 0.7229 | 0.2453 | 0.7229 | 0.8503 | | No log | 9.2444 | 416 | 0.7357 | 0.2453 | 0.7357 | 0.8578 | | No log | 9.2889 | 418 | 
0.7494 | 0.2986 | 0.7494 | 0.8657 | | No log | 9.3333 | 420 | 0.7677 | 0.2593 | 0.7677 | 0.8762 | | No log | 9.3778 | 422 | 0.7626 | 0.2593 | 0.7626 | 0.8733 | | No log | 9.4222 | 424 | 0.7539 | 0.2593 | 0.7539 | 0.8683 | | No log | 9.4667 | 426 | 0.7420 | 0.2637 | 0.7420 | 0.8614 | | No log | 9.5111 | 428 | 0.7383 | 0.2637 | 0.7383 | 0.8592 | | No log | 9.5556 | 430 | 0.7273 | 0.2453 | 0.7273 | 0.8528 | | No log | 9.6 | 432 | 0.7204 | 0.2475 | 0.7204 | 0.8488 | | No log | 9.6444 | 434 | 0.7207 | 0.2475 | 0.7207 | 0.8489 | | No log | 9.6889 | 436 | 0.7212 | 0.2475 | 0.7212 | 0.8492 | | No log | 9.7333 | 438 | 0.7213 | 0.2475 | 0.7213 | 0.8493 | | No log | 9.7778 | 440 | 0.7203 | 0.2475 | 0.7203 | 0.8487 | | No log | 9.8222 | 442 | 0.7211 | 0.2475 | 0.7211 | 0.8492 | | No log | 9.8667 | 444 | 0.7223 | 0.2475 | 0.7223 | 0.8499 | | No log | 9.9111 | 446 | 0.7235 | 0.2464 | 0.7235 | 0.8506 | | No log | 9.9556 | 448 | 0.7245 | 0.2464 | 0.7245 | 0.8512 | | No log | 10.0 | 450 | 0.7247 | 0.2464 | 0.7247 | 0.8513 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu118 - Datasets 2.21.0 - Tokenizers 0.19.1
cstr/Mistral-7B-base-v0.2-GGUF
cstr
"2024-03-24T12:37:41Z"
3
0
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
"2024-03-24T05:38:41Z"
Only a conversion from the mistralai hackathon release, for testing.
mradermacher/bge-base-en-v1.5-GGUF
mradermacher
"2025-01-02T00:33:37Z"
22
0
transformers
[ "transformers", "gguf", "en", "base_model:yanyan1016/bge-base-en-v1.5", "base_model:quantized:yanyan1016/bge-base-en-v1.5", "endpoints_compatible", "region:us", "feature-extraction" ]
null
"2025-01-02T00:28:48Z"
--- base_model: yanyan1016/bge-base-en-v1.5 language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> static quants of https://huggingface.co/yanyan1016/bge-base-en-v1.5 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/bge-base-en-v1.5-GGUF/resolve/main/bge-base-en-v1.5.Q2_K.gguf) | Q2_K | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/bge-base-en-v1.5-GGUF/resolve/main/bge-base-en-v1.5.Q3_K_S.gguf) | Q3_K_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/bge-base-en-v1.5-GGUF/resolve/main/bge-base-en-v1.5.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/bge-base-en-v1.5-GGUF/resolve/main/bge-base-en-v1.5.IQ4_XS.gguf) | IQ4_XS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/bge-base-en-v1.5-GGUF/resolve/main/bge-base-en-v1.5.Q3_K_L.gguf) | Q3_K_L | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/bge-base-en-v1.5-GGUF/resolve/main/bge-base-en-v1.5.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/bge-base-en-v1.5-GGUF/resolve/main/bge-base-en-v1.5.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/bge-base-en-v1.5-GGUF/resolve/main/bge-base-en-v1.5.Q5_K_S.gguf) | Q5_K_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/bge-base-en-v1.5-GGUF/resolve/main/bge-base-en-v1.5.Q5_K_M.gguf) | Q5_K_M | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/bge-base-en-v1.5-GGUF/resolve/main/bge-base-en-v1.5.Q6_K.gguf) | Q6_K | 0.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/bge-base-en-v1.5-GGUF/resolve/main/bge-base-en-v1.5.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/bge-base-en-v1.5-GGUF/resolve/main/bge-base-en-v1.5.f16.gguf) | f16 | 0.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
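These are quants of an embedding model rather than a chat model; a sketch of extracting a sentence embedding through `llama-cpp-python` follows, under the assumption that the build in use supports BERT-family GGUF files (recent llama.cpp versions do).

```python
# Sketch: computing a sentence embedding from the Q8_0 quant.
# Assumes a llama-cpp-python build recent enough to load BERT-family GGUF.
from llama_cpp import Llama

llm = Llama(model_path="bge-base-en-v1.5.Q8_0.gguf", embedding=True)
vec = llm.create_embedding("where is the nearest restaurant?")["data"][0]["embedding"]
print(len(vec))  # 768 dimensions for a bge-base model
```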
visdata/QT1
visdata
"2025-03-20T09:22:05Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-03-20T09:18:24Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
yukioichida/RL_Class_Week1
yukioichida
"2024-06-16T22:11:36Z"
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2024-06-16T22:09:48Z"
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 257.16 +/- 23.68 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's Files and versions tab):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is assumed.
checkpoint = load_from_hub("yukioichida/RL_Class_Week1", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
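With the model loaded as above, a quick rollout gives a sanity check on the reported mean reward; this sketch assumes a Gymnasium install that still registers `LunarLander-v2` (`pip install "gymnasium[box2d]"`; Gymnasium 1.0+ renamed the task to `LunarLander-v3`):

```python
import gymnasium as gym

env = gym.make("LunarLander-v2")  # use "LunarLander-v3" on Gymnasium >= 1.0
obs, _ = env.reset()
total_reward, done = 0.0, False
while not done:
    action, _ = model.predict(obs, deterministic=True)  # `model` from the snippet above
    obs, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode reward: {total_reward:.2f}")
```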
ryo-hsgw/xlm-roberta-base-finetuned-panx-fr
ryo-hsgw
"2022-11-05T23:39:34Z"
10
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-11-05T23:34:50Z"
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-fr results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.fr metrics: - name: F1 type: f1 value: 0.8325761399966348 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2978 - F1: 0.8326 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.574 | 1.0 | 191 | 0.3495 | 0.7889 | | 0.2649 | 2.0 | 382 | 0.2994 | 0.8242 | | 0.1716 | 3.0 | 573 | 0.2978 | 0.8326 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0 - Datasets 1.16.1 - Tokenizers 0.10.3
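The card reports F1 but ships no inference snippet; a minimal sketch with the `transformers` token-classification pipeline (the French example sentence is illustrative only):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ryo-hsgw/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",  # merge subword pieces into whole entities
)
print(ner("Jacques Chirac est né à Paris en 1932."))
```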
vldnechai/a2c-AntBulletEnv-v0
vldnechai
"2023-04-22T12:10:08Z"
1
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2023-04-22T12:08:59Z"
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 2069.78 +/- 24.47 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's Files and versions tab):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub; the filename is assumed.
checkpoint = load_from_hub("vldnechai/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
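As with the PPO card above, a quick rollout sketch. `AntBulletEnv-v0` is registered by `pybullet_envs`, which targets the legacy Gym API; both the package and that API shape are assumptions about the local setup:

```python
import gym
import pybullet_envs  # noqa: F401 -- importing registers AntBulletEnv-v0

env = gym.make("AntBulletEnv-v0")
obs = env.reset()  # legacy Gym API: reset() returns only the observation
total_reward, done = 0.0, False
while not done:
    action, _ = model.predict(obs, deterministic=True)  # `model` from the snippet above
    obs, reward, done, _ = env.step(action)  # legacy 4-tuple step
    total_reward += reward
print(f"Episode reward: {total_reward:.2f}")
```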
guidoivetta/bert-base-spanish-wwm-cased-finetuned-peppa-pig
guidoivetta
"2023-08-31T21:26:50Z"
119
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "generated_from_trainer", "base_model:dccuchile/bert-base-spanish-wwm-cased", "base_model:finetune:dccuchile/bert-base-spanish-wwm-cased", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2023-08-31T21:12:45Z"
--- base_model: dccuchile/bert-base-spanish-wwm-cased tags: - generated_from_trainer model-index: - name: bert-base-spanish-wwm-cased-finetuned-peppa-pig results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-spanish-wwm-cased-finetuned-peppa-pig This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4794 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.6662 | 1.0 | 4 | 1.4686 | | 1.4925 | 2.0 | 8 | 1.6261 | | 1.3818 | 3.0 | 12 | 1.6258 | | 1.4205 | 4.0 | 16 | 1.4041 | | 1.3356 | 5.0 | 20 | 1.5202 | | 1.3471 | 6.0 | 24 | 1.2713 | | 1.304 | 7.0 | 28 | 1.3747 | | 1.2367 | 8.0 | 32 | 1.0912 | | 1.3582 | 9.0 | 36 | 1.3397 | | 1.2332 | 10.0 | 40 | 1.3367 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
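No usage example is given; a minimal fill-mask sketch (the Spanish prompt is illustrative, and BETO-derived models use `[MASK]` as the mask token):

```python
from transformers import pipeline

fill = pipeline(
    "fill-mask",
    model="guidoivetta/bert-base-spanish-wwm-cased-finetuned-peppa-pig",
)
for pred in fill("Peppa juega con su hermano [MASK]."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```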
goofyai/disney_style_xl
goofyai
"2023-11-22T06:44:53Z"
758
15
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail", "region:us" ]
text-to-image
"2023-11-22T06:40:59Z"
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: disney style,animal focus, animal, cat parameters: negative_prompt: bad quality, deformed, artifacts, digital noise output: url: images/c9ad912d-e9b1-4807-950d-ab2d07eaed6e.png - text: >- disney style,one girl wearing round glasses in school dress, short skirt and socks. white shirt with black necktie parameters: negative_prompt: bad quality, deformed, artifacts, digital noise output: url: images/a2ed97c6-1ab5-431c-a4ae-73cedfb494e4.png - text: >- disney style, brown eyes, white shirt, round eyewear, shirt, earrings, closed mouth, brown hair, jewelry, glasses, looking at viewer, dark skin, 1girl, solo, dark-skinned female, very dark skin, curly hair, lips, portrait, black hair, print shirt, short hair, blurry background, outdoors, yellow-framed eyewear, blurry parameters: negative_prompt: bad quality, deformed, artifacts, digital noise output: url: images/d7c67c24-9116-40da-a75f-bf42a211a6c0.png - text: >- disney style, uniform, rabbit, shirt, vest, day, upper body, hands on hips, rabbit girl, animal nose, smile, furry, police, 1girl, solo, animal ears, rabbit ears, policewoman, grey fur, furry female, long sleeves, purple eyes, blurry background, police uniform, outdoors, blurry, blue shirt parameters: negative_prompt: bad quality, deformed, artifacts, digital noise output: url: images/1d0aac43-aa2a-495c-84fd-ca2c9eb22a0d.jpg - text: >- disney style, rain, furry, bear, 1boy, solo, blue headwear, water drop, baseball cap, outdoors, blurry, shirt, male focus, furry male, hat, blue shirt parameters: negative_prompt: bad quality, deformed, artifacts, digital noise output: url: images/5cd36626-22da-46d2-aa79-2ca31c80fd59.png - text: >- disney style, looking at viewer, long hair, dress, lipstick, braid, hair over shoulder, blonde hair, 1girl, solo, purple dress, makeup, stairs, blue eyes, single braid parameters: negative_prompt: bad quality, deformed, artifacts, digital noise output: url: images/4af61860-6dca-4694-9f31-ceaf08071e6d.png - text: >- disney style, lipstick, dress, smile, braid, tiara, blonde hair, 1girl, solo, upper body, gloves, makeup, crown, blue eyes, cape output: url: images/882eb6c8-5c6c-4694-b3f1-f79f8df8ce8a.jpg base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: disney style license: openrail --- # Disney style xl <Gallery /> ## Trigger words You should use `disney style` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/goofyai/disney_style_xl/tree/main) them in the Files & versions tab.
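Unlike the LoRA cards that follow, this one ships no loading snippet; a diffusers sketch, assuming a single-file safetensors layout (the `weight_name` is a guess -- verify the actual filename in the Files & versions tab):

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# weight_name is assumed; check the repo for the actual filename.
pipe.load_lora_weights("goofyai/disney_style_xl", weight_name="disney_style_xl.safetensors")
image = pipe(
    "disney style, one girl wearing round glasses in school dress",  # note the trigger phrase
    negative_prompt="bad quality, deformed, artifacts, digital noise",
).images[0]
image.save("disney_style.png")
```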
outlookAi/5BYpWGjD8b
outlookAi
"2025-02-03T17:17:44Z"
10
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2025-02-03T16:56:51Z"
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: ajwebmaster --- # 5Bypwgjd8B <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `ajwebmaster` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('outlookAi/5BYpWGjD8b', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
TobiGeth/tg_user_780322507_lora_1740709792
TobiGeth
"2025-02-28T02:41:26Z"
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2025-02-28T02:41:25Z"
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: USER_780322507_1740709792 --- # Tg_User_780322507_Lora_1740709792 <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `USER_780322507_1740709792` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('TobiGeth/tg_user_780322507_lora_1740709792', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
Smorty100/godot_dodo_4x_60k_llama_7b-Q4_K_M-GGUF
Smorty100
"2025-01-05T15:54:48Z"
99
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:minosu/godot_dodo_4x_60k_llama_7b", "base_model:quantized:minosu/godot_dodo_4x_60k_llama_7b", "endpoints_compatible", "region:us" ]
null
"2025-01-05T15:54:30Z"
--- base_model: minosu/godot_dodo_4x_60k_llama_7b tags: - llama-cpp - gguf-my-repo --- # Smorty100/godot_dodo_4x_60k_llama_7b-Q4_K_M-GGUF This model was converted to GGUF format from [`minosu/godot_dodo_4x_60k_llama_7b`](https://huggingface.co/minosu/godot_dodo_4x_60k_llama_7b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/minosu/godot_dodo_4x_60k_llama_7b) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Smorty100/godot_dodo_4x_60k_llama_7b-Q4_K_M-GGUF --hf-file godot_dodo_4x_60k_llama_7b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Smorty100/godot_dodo_4x_60k_llama_7b-Q4_K_M-GGUF --hf-file godot_dodo_4x_60k_llama_7b-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Smorty100/godot_dodo_4x_60k_llama_7b-Q4_K_M-GGUF --hf-file godot_dodo_4x_60k_llama_7b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Smorty100/godot_dodo_4x_60k_llama_7b-Q4_K_M-GGUF --hf-file godot_dodo_4x_60k_llama_7b-q4_k_m.gguf -c 2048 ```
davidlyng/SmolLM2-FT-MyDataset
davidlyng
"2025-02-22T21:20:29Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "smol-course", "module_1", "trl", "sft", "conversational", "base_model:HuggingFaceTB/SmolLM2-135M", "base_model:finetune:HuggingFaceTB/SmolLM2-135M", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-02-22T21:19:49Z"
--- base_model: HuggingFaceTB/SmolLM2-135M library_name: transformers model_name: SmolLM2-FT-MyDataset tags: - generated_from_trainer - smol-course - module_1 - trl - sft licence: license --- # Model Card for SmolLM2-FT-MyDataset This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="davidlyng/SmolLM2-FT-MyDataset", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lyngda-individual/huggingface/runs/ngidle7n) This model was trained with SFT. ### Framework versions - TRL: 0.15.1 - Transformers: 4.48.3 - Pytorch: 2.5.1+cu124 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
supkon/secondresults
supkon
"2024-11-17T23:34:29Z"
117
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-base", "base_model:finetune:google/flan-t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-11-17T23:33:52Z"
--- library_name: transformers license: apache-2.0 base_model: google/flan-t5-base tags: - generated_from_trainer model-index: - name: secondresults results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # secondresults This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 5.2480 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 7.2741 | 1.0 | 1558 | 5.2480 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
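The card documents training only; a minimal inference sketch with the text2text pipeline (the prompt is a placeholder -- the tuned task is undocumented, and the 5.25 eval loss suggests outputs may be rough):

```python
from transformers import pipeline

t2t = pipeline("text2text-generation", model="supkon/secondresults")
# The target task is undocumented, so this prompt is only a smoke test.
result = t2t("summarize: The quick brown fox jumps over the lazy dog.", max_new_tokens=32)
print(result[0]["generated_text"])
```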
hgnoi/SLNG8o07vUdLq5SH
hgnoi
"2024-05-24T15:02:04Z"
136
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-24T15:00:27Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]