| Column | Type | Range |
|---|---|---|
| modelId | string | length 5 to 138 |
| author | string | length 2 to 42 |
| last_modified | date | 2020-02-15 11:33:14 to 2025-04-15 18:26:17 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 427 classes |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | date | 2022-03-02 23:29:04 to 2025-04-15 18:25:21 |
| card | string | length 11 to 1.01M |
rahulsnkr/ppo-LunarLander-v2
rahulsnkr
"2023-02-13T15:39:57Z"
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2023-02-13T15:31:34Z"
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 263.24 +/- 16.09 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the checkpoint filename inside the repo is the conventional one and is an assumption:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it.
checkpoint = load_from_hub(repo_id="rahulsnkr/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
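Under the same assumptions, a short evaluation rollout with the Gymnasium API (whether this checkpoint expects the classic Gym or the Gymnasium observation API depends on the SB3 version it was trained with):
```python
import gymnasium as gym

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    # `model` comes from the loading snippet above
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"episode return: {total_reward:.2f}")
```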
hvein/edgeee_5
hvein
"2024-09-11T13:03:51Z"
29
0
diffusers
[ "diffusers", "safetensors", "stablediffusionapi.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-09-11T13:00:18Z"
--- license: creativeml-openrail-m tags: - stablediffusionapi.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true --- # NewDream-SDXL 2.0 API Inference ![generated from stablediffusionapi.com](https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/8478583971702167737.png) ## Get API Key Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment is needed. Replace the key in the code below and change **model_id** to "newdream-sdxl-20". Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs) Try the model for free: [Generate Images](https://stablediffusionapi.com/models/newdream-sdxl-20) Model link: [View model](https://stablediffusionapi.com/models/newdream-sdxl-20) Credits: [View credits](https://civitai.com/?query=NewDream-SDXL%202.0) View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "newdream-sdxl-20",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {'Content-Type': 'application/json'}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task1319
Lots-of-LoRAs
"2024-07-03T20:31:36Z"
0
0
pytorch
[ "pytorch", "safetensors", "en", "arxiv:1910.09700", "arxiv:2407.00066", "license:mit", "region:us" ]
null
"2024-06-18T20:05:59Z"
--- language: en license: mit library_name: pytorch --- # Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task1319 <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> LoRA trained on task1319_country_by_barcode_prefix - **Developed by:** bruel - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** LoRA - **Language(s) (NLP):** en - **License:** mit - **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2 ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/bruel-gabrielsson - **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> https://huggingface.co/datasets/Lots-of-LoRAs/task1319_country_by_barcode_prefix sourced from https://github.com/allenai/natural-instructions ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** @misc{brüelgabrielsson2024compressserveservingthousands, title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead}, author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon}, year={2024}, eprint={2407.00066}, archivePrefix={arXiv}, primaryClass={cs.DC}, url={https://arxiv.org/abs/2407.00066}, } **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
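Since the card's quick-start section is empty, here is a minimal loading sketch, assuming the adapter is stored in standard PEFT format (not confirmed by the card):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

# Attach the task-specific LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base, "Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task1319")
model.eval()
```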
yam8572/dqn-SpaceInvaders-v5
yam8572
"2023-06-10T23:30:31Z"
1
0
stable-baselines3
[ "stable-baselines3", "ALE/SpaceInvaders-v5", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2023-06-10T23:30:07Z"
--- library_name: stable-baselines3 tags: - ALE/SpaceInvaders-v5 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: ALE/SpaceInvaders-v5 type: ALE/SpaceInvaders-v5 metrics: - type: mean_reward value: 576.50 +/- 114.89 name: mean_reward verified: false --- # **DQN** Agent playing **ALE/SpaceInvaders-v5** This is a trained model of a **DQN** agent playing **ALE/SpaceInvaders-v5** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env ALE/SpaceInvaders-v5 -orga yam8572 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env ALE/SpaceInvaders-v5 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env ALE/SpaceInvaders-v5 -orga yam8572 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env ALE/SpaceInvaders-v5 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env ALE/SpaceInvaders-v5 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env ALE/SpaceInvaders-v5 -f logs/ -orga yam8572 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 10000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
venkatesh-thiru/s2l8h-UNet-6depth-upsample
venkatesh-thiru
"2025-01-27T12:20:00Z"
573
0
transformers
[ "transformers", "pytorch", "safetensors", "s2l8hModel", "feature-extraction", "agriculture", "remote sensing", "earth observation", "landsat", "sentinel-2", "custom_code", "license:mit", "region:us" ]
feature-extraction
"2024-05-18T20:16:47Z"
--- license: mit tags: - agriculture - remote sensing - earth observation - landsat - sentinel-2 --- ## Model Card for UNet-6depth-Up+Conv: `venkatesh-thiru/s2l8h-UNet-6depth-upsample` ### Model Description The UNet-6depth-upsample model is designed to harmonize Landsat-8 and Sentinel-2 satellite imagery by enhancing the spatial resolution of Landsat-8 images. This model takes in Landsat-8 multispectral images (Bottom of the Atmosphere (L2) Reflectances) and pan-chromatic images (Top of the Atmosphere (L1) Reflectances) and outputs images that match the spectral and spatial qualities of Sentinel-2 data. ### Model Architecture This model is a UNet architecture with 6 depth levels and utilizes upsampling combined with convolutional layers to achieve high-fidelity image enhancement. The depth and convolutional layers are fine-tuned to provide a robust transformation that ensures improved spatial resolution and spectral consistency with Sentinel-2 images. ### Usage ```python from transformers import AutoModel # Load the UNet-6depth-Up+Conv model model = AutoModel.from_pretrained("venkatesh-thiru/s2l8h-UNet-6depth-upsample", trust_remote_code=True) # Harmonize Landsat-8 images l8up = model(l8MS, l8pan) ``` Where: `l8MS` - Landsat Multispectral images (L2 Reflectances) `l8pan` - Landsat Pan-Chromatic images (L1 Reflectances) ### Applications - Water quality assessment - Urban planning - Climate monitoring - Disaster response - Infrastructure oversight - Agricultural surveillance ### Limitations While the model generalizes well to most regions of the world, minor limitations may occur in areas with significantly different spectral characteristics or extreme environmental conditions. ### Reference For more details, refer to the publication: [10.1016/j.isprsjprs.2024.04.026](https://doi.org/10.1016/j.isprsjprs.2024.04.026)
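To make the call signature above concrete, a self-contained sketch with placeholder inputs; the band count and the 2x resolution ratio between the multispectral and pan-chromatic tiles are assumptions, not confirmed by the card:
```python
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "venkatesh-thiru/s2l8h-UNet-6depth-upsample", trust_remote_code=True
)
model.eval()

# Placeholder tensors: 6 multispectral bands at 30 m and one pan-chromatic band
# at 15 m (twice the spatial resolution) are assumptions for illustration only.
l8MS = torch.rand(1, 6, 128, 128)   # batch, bands, H, W
l8pan = torch.rand(1, 1, 256, 256)  # pan band at twice the resolution
with torch.no_grad():
    l8up = model(l8MS, l8pan)
print(l8up.shape)
```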
LGAI-EXAONE/EXAONE-Deep-7.8B-AWQ
LGAI-EXAONE
"2025-03-19T07:58:52Z"
404
11
transformers
[ "transformers", "safetensors", "exaone", "text-generation", "lg-ai", "exaone-deep", "conversational", "custom_code", "en", "ko", "arxiv:2503.12524", "base_model:LGAI-EXAONE/EXAONE-Deep-7.8B", "base_model:quantized:LGAI-EXAONE/EXAONE-Deep-7.8B", "license:other", "autotrain_compatible", "4-bit", "awq", "region:us" ]
text-generation
"2025-03-12T04:43:28Z"
--- base_model: LGAI-EXAONE/EXAONE-Deep-7.8B base_model_relation: quantized license: other license_name: exaone license_link: LICENSE language: - en - ko tags: - lg-ai - exaone - exaone-deep pipeline_tag: text-generation library_name: transformers --- <p align="center"> <img src="assets/EXAONE_Symbol+BI_3d.png" width="300" style="margin: 40 auto;"> <br> # EXAONE-Deep-7.8B-AWQ ## Introduction We introduce EXAONE Deep, a family of models ranging from 2.4B to 32B parameters, developed and released by LG AI Research, which exhibits superior capabilities in various reasoning tasks, including math and coding benchmarks. Evaluation results show that 1) EXAONE Deep **2.4B** outperforms other models of comparable size, 2) EXAONE Deep **7.8B** outperforms not only open-weight models of comparable scale but also the proprietary reasoning model OpenAI o1-mini, and 3) EXAONE Deep **32B** demonstrates competitive performance against leading open-weight models. For more details, please refer to our [documentation](https://arxiv.org/abs/2503.12524), [blog](https://www.lgresearch.ai/news/view?seq=543) and [GitHub](https://github.com/LG-AI-EXAONE/EXAONE-Deep). <p align="center"> <img src="assets/exaone_deep_overall_performance.png" width="100%" style="margin: 40 auto;"> This repository contains the AWQ-quantized weights of the reasoning 7.8B language model, with the following features: - Number of Parameters (without embeddings): 6.98B - Number of Layers: 32 - Number of Attention Heads: GQA with 32 Q-heads and 8 KV-heads - Vocab Size: 102,400 - Context Length: 32,768 tokens - Quantization: AWQ with 4-bit group-wise weight-only quantization (W4A16g128) ## Quickstart We recommend using `transformers>=4.43.1` and `autoawq>=0.2.8`. Here is the code snippet to run conversational inference with the model: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer from threading import Thread model_name = "LGAI-EXAONE/EXAONE-Deep-7.8B-AWQ" streaming = True # choose the streaming option model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) # Choose your prompt: # Math example (AIME 2024) prompt = r"""Let $x,y$ and $z$ be positive real numbers that satisfy the following system of equations: \[\log_2\left({x \over yz}\right) = {1 \over 2}\]\[\log_2\left({y \over xz}\right) = {1 \over 3}\]\[\log_2\left({z \over xy}\right) = {1 \over 4}\] Then the value of $\left|\log_2(x^4y^3z^2)\right|$ is $\tfrac{m}{n}$ where $m$ and $n$ are relatively prime positive integers. Find $m+n$. Please reason step by step, and put your final answer within \boxed{}.""" # Korean MCQA example (CSAT Math 2025) prompt = r"""Question : $a_1 = 2$인 수열 $\{a_n\}$과 $b_1 = 2$인 등차수열 $\{b_n\}$이 모든 자연수 $n$에 대하여\[\sum_{k=1}^{n} \frac{a_k}{b_{k+1}} = \frac{1}{2} n^2\]을 만족시킬 때, $\sum_{k=1}^{5} a_k$의 값을 구하여라. 
Options : A) 120 B) 125 C) 130 D) 135 E) 140 Please reason step by step, and you should write the correct option alphabet (A, B, C, D or E) within \\boxed{}.""" messages = [ {"role": "user", "content": prompt} ] input_ids = tokenizer.apply_chat_template( messages, tokenize=True, add_generation_prompt=True, return_tensors="pt" ) if streaming: streamer = TextIteratorStreamer(tokenizer) thread = Thread(target=model.generate, kwargs=dict( input_ids=input_ids.to("cuda"), eos_token_id=tokenizer.eos_token_id, max_new_tokens=32768, do_sample=True, temperature=0.6, top_p=0.95, streamer=streamer )) thread.start() for text in streamer: print(text, end="", flush=True) else: output = model.generate( input_ids.to("cuda"), eos_token_id=tokenizer.eos_token_id, max_new_tokens=32768, do_sample=True, temperature=0.6, top_p=0.95, ) print(tokenizer.decode(output[0])) ``` > ### Note > The EXAONE Deep models are trained with an optimized configuration, > so we recommend following the [Usage Guideline](#usage-guideline) section to achieve optimal performance. ## Evaluation You can check the evaluation results of the original EXAONE Deep models at [GitHub](https://github.com/LG-AI-EXAONE/EXAONE-Deep) or in our [documentation](https://arxiv.org/abs/2503.12524). ## Deployment EXAONE Deep models can be served with various inference frameworks, such as: - `TensorRT-LLM` - `vLLM` - `SGLang` - `llama.cpp` - `Ollama` - `LM-Studio` Please refer to our [EXAONE Deep GitHub](https://github.com/LG-AI-EXAONE/EXAONE-Deep) for more details about the inference frameworks. ## Quantization We provide pre-quantized EXAONE Deep models in **AWQ** format and in several quantization types in **GGUF** format. Please refer to our [EXAONE Deep collection](https://huggingface.co/collections/LGAI-EXAONE/exaone-deep-67d119918816ec6efa79a4aa) to find the corresponding quantized models. ## Usage Guideline To achieve the expected performance, we recommend the following configurations: 1. Ensure the model starts with `<thought>\n` for reasoning steps. The model's output quality may be degraded when you omit it. You can easily apply this by using `tokenizer.apply_chat_template()` with `add_generation_prompt=True`. Please check the example code in the [Quickstart](#quickstart) section. 2. The reasoning steps of EXAONE Deep models, enclosed by `<thought>\n...\n</thought>`, usually contain a large number of tokens, so previous reasoning steps may need to be removed in multi-turn situations. The provided tokenizer handles this automatically. 3. Avoid using a system prompt; build the instruction into the user prompt. 4. Additional instructions help the models reason more deeply, so that they generate better output. - For math problems, the instruction **"Please reason step by step, and put your final answer within \boxed{}."** is helpful. - For more information on our evaluation setting, including prompts, please refer to our [documentation](https://arxiv.org/abs/2503.12524). 5. In our evaluation, we use `temperature=0.6` and `top_p=0.95` for generation. 6. When evaluating the models, it is recommended to test multiple times to assess the expected performance accurately. ## Limitation The EXAONE language model has certain limitations and may occasionally generate inappropriate responses. The language model generates responses based on the output probability of tokens, which is determined during learning from the training data. 
While we have made every effort to exclude personal, harmful, and biased information from the training data, some problematic content may still be included, potentially leading to undesirable responses. Please note that text generated by the EXAONE language model does not reflect the views of LG AI Research. - Inappropriate answers may be generated, containing personal, harmful or other inappropriate information. - Biased responses may be generated, associated with age, gender, race, and so on. - The generated responses rely heavily on statistics from the training data, which can result in the generation of semantically or syntactically incorrect sentences. - Since the model does not reflect the latest information, the responses may be false or contradictory. LG AI Research strives to reduce potential risks that may arise from EXAONE language models. Users are not allowed to engage in any malicious activities (e.g., entering illegal information) that may induce the creation of inappropriate outputs violating LG AI’s ethical principles when using EXAONE language models. ## License The model is licensed under the [EXAONE AI Model License Agreement 1.1 - NC](./LICENSE) ## Citation ``` @article{exaone-deep, title={EXAONE Deep: Reasoning Enhanced Language Models}, author={{LG AI Research}}, journal={arXiv preprint arXiv:2503.12524}, year={2025} } ``` ## Contact LG AI Research Technical Support: [email protected]
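As a sketch of the vLLM path listed under Deployment above; the exact flags and vLLM version support for this custom architecture are assumptions:
```python
from vllm import LLM, SamplingParams

# AWQ quantization flag and trust_remote_code are assumed to be required here.
llm = LLM(
    model="LGAI-EXAONE/EXAONE-Deep-7.8B-AWQ",
    quantization="awq",
    trust_remote_code=True,
)
sampling = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=4096)
messages = [{"role": "user", "content": "Please reason step by step: what is 17 * 24?"}]
outputs = llm.chat(messages, sampling)
print(outputs[0].outputs[0].text)
```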
speechmaster/omg
speechmaster
"2025-04-14T20:10:23Z"
0
0
null
[ "onnx", "region:us" ]
null
"2025-03-18T16:57:12Z"
jdorairaj/Bert-uncased-adapter-wnli
jdorairaj
"2024-02-19T00:54:56Z"
0
0
adapter-transformers
[ "adapter-transformers", "bert", "dataset:wnli", "region:us" ]
null
"2024-02-19T00:47:36Z"
--- tags: - adapter-transformers - bert datasets: - wnli --- # Adapter `jdorairaj/Bert-uncased-adapter-wnli` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [wnli](https://huggingface.co/datasets/wnli/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library. ## Usage First, install `adapters`: ``` pip install -U adapters ``` Now, the adapter can be loaded and activated like this: ```python from adapters import AutoAdapterModel model = AutoAdapterModel.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("jdorairaj/Bert-uncased-adapter-wnli", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
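A possible inference follow-up to the loading snippet above; the WNLI label mapping is an assumption:
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer(
    "The trophy doesn't fit in the suitcase because it is too big.",
    "The trophy is too big.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits  # `model` from the snippet above
print(logits.argmax(dim=-1).item())  # 0/1; WNLI label order assumed
```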
Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task1482
Lots-of-LoRAs
"2024-07-03T20:18:24Z"
0
0
pytorch
[ "pytorch", "safetensors", "en", "arxiv:1910.09700", "arxiv:2407.00066", "license:mit", "region:us" ]
null
"2024-06-18T19:48:10Z"
--- language: en license: mit library_name: pytorch --- # Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task1482 <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> LoRA trained on task1482_gene_extraction_chemprot_dataset - **Developed by:** bruel - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** LoRA - **Language(s) (NLP):** en - **License:** mit - **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2 ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/bruel-gabrielsson - **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> https://huggingface.co/datasets/Lots-of-LoRAs/task1482_gene_extraction_chemprot_dataset sourced from https://github.com/allenai/natural-instructions ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** @misc{brüelgabrielsson2024compressserveservingthousands, title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead}, author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon}, year={2024}, eprint={2407.00066}, archivePrefix={arXiv}, primaryClass={cs.DC}, url={https://arxiv.org/abs/2407.00066}, } **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Model-SafeTensors/EVA-Qwen2.5-32B-v0.2
Model-SafeTensors
"2024-11-14T13:19:28Z"
6
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "conversational", "dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal", "dataset:Nopm/Opus_WritingStruct", "dataset:Gryphe/Sonnet3.5-SlimOrcaDedupCleaned", "dataset:Gryphe/Sonnet3.5-Charcard-Roleplay", "dataset:Gryphe/ChatGPT-4o-Writing-Prompts", "dataset:Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned", "dataset:Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned", "dataset:nothingiisreal/Reddit-Dirty-And-WritingPrompts", "dataset:allura-org/Celeste-1.x-data-mixture", "dataset:cognitivecomputations/dolphin-2.9.3", "base_model:Qwen/Qwen2.5-32B", "base_model:finetune:Qwen/Qwen2.5-32B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-11-19T00:32:33Z"
--- library_name: transformers license: apache-2.0 datasets: - anthracite-org/kalo-opus-instruct-22k-no-refusal - Nopm/Opus_WritingStruct - Gryphe/Sonnet3.5-SlimOrcaDedupCleaned - Gryphe/Sonnet3.5-Charcard-Roleplay - Gryphe/ChatGPT-4o-Writing-Prompts - Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned - Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned - nothingiisreal/Reddit-Dirty-And-WritingPrompts - allura-org/Celeste-1.x-data-mixture - cognitivecomputations/dolphin-2.9.3 base_model: Qwen/Qwen2.5-32B tags: - generated_from_trainer model-index: - name: EVA-Qwen2.5-32B-SFFT-v0.1 results: [] --- # EVA Qwen2.5-32B v0.2 <p> A RP/storywriting specialist model, a full-parameter finetune of Qwen2.5-32B on a mixture of synthetic and natural data.<br> It uses the Celeste 70B 0.1 data mixture, greatly expanding it to improve the versatility, creativity and "flavor" of the resulting model.<br> </p> <p>Dedicated to Nev.</p> <p><b>Version notes for 0.2</b>: Basically, reprocessed the whole dataset again, due to a severe mistake in the previously used pipeline, which left the data poisoned with a lot of non-Unicode characters. Now, no more weird generation artifacts, and more stability. Major kudos to Cahvay for his work on fixing this critical issue.</p> <p> <p>Prompt format is ChatML.</p><br> <h3>Recommended sampler values:</h3> <ul> <li>Temperature: 1</li> <li>Min-P: 0.05</li> <li>Top-A: 0.2</li> <li>Repetition Penalty: 1.03</li> </ul> <h3>Recommended SillyTavern presets (via CalamitousFelicitousness):</h3> - [Context](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Context.json) - [Instruct and System Prompt](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Instruct.json) </p> <p> <br> <h3> Training data: </h3> <ul> <li>Celeste 70B 0.1 data mixture minus the Opus Instruct subset. 
See that model's <a href=https://huggingface.co/nothingiisreal/L3.1-70B-Celeste-V0.1-BF16>card</a> for details.</li> <li>Kalomaze's Opus_Instruct_25k dataset, filtered for refusals.</li> <li>A subset (1k rows) of ChatGPT-4o-WritingPrompts by Gryphe</li> <li>A subset (2k rows) of Sonnet3.5-Charcards-Roleplay by Gryphe</li> <li>Synthstruct and SynthRP datasets by Epiculous</li> <li>A subset from Dolphin-2.9.3, including filtered version of not_samantha and a small subset of systemchat.</li> </ul> <h3> Training time and hardware: </h3> <ul><li>7 hours on 8xH100 SXM, provided by <a href=https://featherless.ai/>FeatherlessAI</a></li></ul><br> </p> <p>Model was created by Kearm, Auri and Cahvay.</p> <h4>Special thanks:</h4><ul> <li><b>to Cahvay for his work on investigating and reprocessing the corrupted dataset, removing the single biggest source of data poisoning.</b></li> <li><b>to <a href=https://featherless.ai/>FeatherlessAI</a> for generously providing 8xH100 SXM node for training of this model</b></li> <li>to Gryphe, Lemmy, Kalomaze, Nopm, Epiculous and CognitiveComputations for the data</li> <li>and to Allura-org for support, feedback, beta-testing and doing quality control of EVA models.</li></ul> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml base_model: Qwen/Qwen2.5-32B load_in_8bit: false load_in_4bit: false strict: false plugins: - axolotl.integrations.liger.LigerPlugin liger_rope: true liger_rms_norm: true liger_swiglu: true liger_fused_linear_cross_entropy: true # plugins: # - axolotl.integrations.spectrum.SpectrumPlugin # spectrum_top_fraction: 0.5 # # Optional if using a pre-scanned model as your base_model. 
Useful if using a model mirror # spectrum_model_name: Qwen/Qwen2.5-32B datasets: - path: datasets/Celeste_Filtered_utf8fix.jsonl type: sharegpt - path: datasets/deduped_not_samantha_norefusals.jsonl type: sharegpt - path: datasets/deduped_SynthRP-Gens_processed_ShareGPT_converted_cleaned.jsonl type: sharegpt - path: datasets/deduped_Synthstruct-Gens_processed_sharegpt_converted_cleaned.jsonl type: sharegpt - path: datasets/Gryphe-4o-WP-filtered-sharegpt_utf8fix.jsonl type: sharegpt - path: datasets/opus-instruct-22k-no_refusals-filtered_utf8fix.jsonl type: sharegpt - path: datasets/Sonnet3-5-charcard-names-filtered-sharegpt_utf8fix.jsonl type: sharegpt - path: datasets/SystemChat_subset_filtered_sharegpt_utf8fix.jsonl type: sharegpt chat_template: chatml shuffle_merged_datasets: true val_set_size: 0.001 output_dir: ./EVA-Qwen2.5-32B-SFFT-v0.1 sequence_len: 10240 sample_packing: true eval_sample_packing: false pad_to_sequence_len: true # adapter: qlora # lora_model_dir: # lora_r: 64 # lora_alpha: 128 # lora_dropout: 0.05 # lora_target_linear: true # peft_use_dora: true unfrozen_parameters: - ^lm_head.weight$ - ^model.embed_tokens.weight$ # mlp.down_proj layers - model.layers.63.mlp.down_proj - model.layers.49.mlp.down_proj - model.layers.48.mlp.down_proj - model.layers.45.mlp.down_proj - model.layers.44.mlp.down_proj - model.layers.47.mlp.down_proj - model.layers.46.mlp.down_proj - model.layers.43.mlp.down_proj - model.layers.8.mlp.down_proj - model.layers.11.mlp.down_proj - model.layers.19.mlp.down_proj - model.layers.35.mlp.down_proj - model.layers.20.mlp.down_proj - model.layers.52.mlp.down_proj - model.layers.39.mlp.down_proj - model.layers.62.mlp.down_proj - model.layers.50.mlp.down_proj - model.layers.29.mlp.down_proj - model.layers.16.mlp.down_proj - model.layers.28.mlp.down_proj - model.layers.53.mlp.down_proj - model.layers.30.mlp.down_proj - model.layers.31.mlp.down_proj - model.layers.32.mlp.down_proj - model.layers.7.mlp.down_proj - model.layers.36.mlp.down_proj - model.layers.12.mlp.down_proj - model.layers.18.mlp.down_proj - model.layers.37.mlp.down_proj - model.layers.38.mlp.down_proj - model.layers.14.mlp.down_proj - model.layers.13.mlp.down_proj # mlp.gate_proj layers - model.layers.43.mlp.gate_proj - model.layers.61.mlp.gate_proj - model.layers.60.mlp.gate_proj - model.layers.44.mlp.gate_proj - model.layers.62.mlp.gate_proj - model.layers.28.mlp.gate_proj - model.layers.29.mlp.gate_proj - model.layers.45.mlp.gate_proj - model.layers.37.mlp.gate_proj - model.layers.35.mlp.gate_proj - model.layers.59.mlp.gate_proj - model.layers.36.mlp.gate_proj - model.layers.30.mlp.gate_proj - model.layers.48.mlp.gate_proj - model.layers.38.mlp.gate_proj - model.layers.27.mlp.gate_proj - model.layers.31.mlp.gate_proj - model.layers.34.mlp.gate_proj - model.layers.58.mlp.gate_proj - model.layers.33.mlp.gate_proj - model.layers.39.mlp.gate_proj - model.layers.26.mlp.gate_proj - model.layers.32.mlp.gate_proj - model.layers.46.mlp.gate_proj - model.layers.42.mlp.gate_proj - model.layers.49.mlp.gate_proj - model.layers.57.mlp.gate_proj - model.layers.50.mlp.gate_proj - model.layers.47.mlp.gate_proj - model.layers.56.mlp.gate_proj - model.layers.63.mlp.gate_proj - model.layers.55.mlp.gate_proj # mlp.up_proj layers - model.layers.61.mlp.up_proj - model.layers.60.mlp.up_proj - model.layers.32.mlp.up_proj - model.layers.59.mlp.up_proj - model.layers.58.mlp.up_proj - model.layers.57.mlp.up_proj - model.layers.44.mlp.up_proj - model.layers.28.mlp.up_proj - model.layers.35.mlp.up_proj - 
model.layers.36.mlp.up_proj - model.layers.29.mlp.up_proj - model.layers.31.mlp.up_proj - model.layers.34.mlp.up_proj - model.layers.55.mlp.up_proj - model.layers.49.mlp.up_proj - model.layers.30.mlp.up_proj - model.layers.53.mlp.up_proj - model.layers.43.mlp.up_proj - model.layers.56.mlp.up_proj - model.layers.33.mlp.up_proj - model.layers.54.mlp.up_proj - model.layers.62.mlp.up_proj - model.layers.27.mlp.up_proj - model.layers.51.mlp.up_proj - model.layers.52.mlp.up_proj - model.layers.37.mlp.up_proj - model.layers.45.mlp.up_proj - model.layers.26.mlp.up_proj - model.layers.42.mlp.up_proj - model.layers.50.mlp.up_proj - model.layers.48.mlp.up_proj - model.layers.39.mlp.up_proj # self_attn.k_proj layers - model.layers.63.self_attn.k_proj - model.layers.55.self_attn.k_proj - model.layers.60.self_attn.k_proj - model.layers.7.self_attn.k_proj - model.layers.12.self_attn.k_proj - model.layers.13.self_attn.k_proj - model.layers.57.self_attn.k_proj - model.layers.29.self_attn.k_proj - model.layers.14.self_attn.k_proj - model.layers.51.self_attn.k_proj - model.layers.53.self_attn.k_proj - model.layers.54.self_attn.k_proj - model.layers.22.self_attn.k_proj - model.layers.61.self_attn.k_proj - model.layers.18.self_attn.k_proj - model.layers.30.self_attn.k_proj - model.layers.9.self_attn.k_proj - model.layers.24.self_attn.k_proj - model.layers.23.self_attn.k_proj - model.layers.25.self_attn.k_proj - model.layers.10.self_attn.k_proj - model.layers.58.self_attn.k_proj - model.layers.56.self_attn.k_proj - model.layers.15.self_attn.k_proj - model.layers.32.self_attn.k_proj - model.layers.28.self_attn.k_proj - model.layers.8.self_attn.k_proj - model.layers.59.self_attn.k_proj - model.layers.11.self_attn.k_proj - model.layers.48.self_attn.k_proj - model.layers.16.self_attn.k_proj - model.layers.50.self_attn.k_proj # self_attn.o_proj layers - model.layers.15.self_attn.o_proj - model.layers.23.self_attn.o_proj - model.layers.31.self_attn.o_proj - model.layers.30.self_attn.o_proj - model.layers.18.self_attn.o_proj - model.layers.24.self_attn.o_proj - model.layers.17.self_attn.o_proj - model.layers.28.self_attn.o_proj - model.layers.34.self_attn.o_proj - model.layers.33.self_attn.o_proj - model.layers.25.self_attn.o_proj - model.layers.12.self_attn.o_proj - model.layers.14.self_attn.o_proj - model.layers.29.self_attn.o_proj - model.layers.16.self_attn.o_proj - model.layers.26.self_attn.o_proj - model.layers.22.self_attn.o_proj - model.layers.27.self_attn.o_proj - model.layers.35.self_attn.o_proj - model.layers.20.self_attn.o_proj - model.layers.13.self_attn.o_proj - model.layers.36.self_attn.o_proj - model.layers.19.self_attn.o_proj - model.layers.37.self_attn.o_proj - model.layers.21.self_attn.o_proj - model.layers.11.self_attn.o_proj - model.layers.54.self_attn.o_proj - model.layers.5.self_attn.o_proj - model.layers.38.self_attn.o_proj - model.layers.6.self_attn.o_proj - model.layers.8.self_attn.o_proj - model.layers.9.self_attn.o_proj # self_attn.q_proj layers - model.layers.1.self_attn.q_proj - model.layers.2.self_attn.q_proj - model.layers.3.self_attn.q_proj - model.layers.45.self_attn.q_proj - model.layers.54.self_attn.q_proj - model.layers.35.self_attn.q_proj - model.layers.48.self_attn.q_proj - model.layers.61.self_attn.q_proj - model.layers.52.self_attn.q_proj - model.layers.50.self_attn.q_proj - model.layers.60.self_attn.q_proj - model.layers.56.self_attn.q_proj - model.layers.58.self_attn.q_proj - model.layers.42.self_attn.q_proj - model.layers.59.self_attn.q_proj - 
model.layers.44.self_attn.q_proj - model.layers.55.self_attn.q_proj - model.layers.57.self_attn.q_proj - model.layers.41.self_attn.q_proj - model.layers.36.self_attn.q_proj - model.layers.39.self_attn.q_proj - model.layers.4.self_attn.q_proj - model.layers.43.self_attn.q_proj - model.layers.34.self_attn.q_proj - model.layers.46.self_attn.q_proj - model.layers.49.self_attn.q_proj - model.layers.40.self_attn.q_proj - model.layers.25.self_attn.q_proj - model.layers.51.self_attn.q_proj - model.layers.17.self_attn.q_proj - model.layers.37.self_attn.q_proj - model.layers.53.self_attn.q_proj # self_attn.v_proj layers - model.layers.55.self_attn.v_proj - model.layers.31.self_attn.v_proj - model.layers.47.self_attn.v_proj - model.layers.45.self_attn.v_proj - model.layers.49.self_attn.v_proj - model.layers.48.self_attn.v_proj - model.layers.15.self_attn.v_proj - model.layers.30.self_attn.v_proj - model.layers.7.self_attn.v_proj - model.layers.44.self_attn.v_proj - model.layers.29.self_attn.v_proj - model.layers.51.self_attn.v_proj - model.layers.50.self_attn.v_proj - model.layers.14.self_attn.v_proj - model.layers.54.self_attn.v_proj - model.layers.32.self_attn.v_proj - model.layers.43.self_attn.v_proj - model.layers.10.self_attn.v_proj - model.layers.46.self_attn.v_proj - model.layers.38.self_attn.v_proj - model.layers.57.self_attn.v_proj - model.layers.22.self_attn.v_proj - model.layers.39.self_attn.v_proj - model.layers.6.self_attn.v_proj - model.layers.23.self_attn.v_proj - model.layers.58.self_attn.v_proj - model.layers.53.self_attn.v_proj - model.layers.40.self_attn.v_proj - model.layers.24.self_attn.v_proj - model.layers.9.self_attn.v_proj - model.layers.25.self_attn.v_proj - model.layers.5.self_attn.v_proj wandb_project: EVA-Qwen2.5-32B-SFFT-v0.2 wandb_entity: wandb_watch: wandb_name: Unit-02 wandb_log_model: gradient_accumulation_steps: 8 micro_batch_size: 1 num_epochs: 3 optimizer: paged_adamw_8bit lr_scheduler: cosine learning_rate: 0.00005 max_grad_norm: 3 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: "unsloth" # gradient_checkpointing_kwargs: # use_reentrant: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 20 evals_per_epoch: 4 saves_per_epoch: 4 save_safetensors: true hub_model_id: hub_strategy: debug: deepspeed: deepspeed_configs/zero3_bf16.json weight_decay: 0.1 # fsdp: # - full_shard # - auto_wrap # fsdp_config: # fsdp_limit_all_gathers: true # fsdp_sync_module_states: false # fsdp_offload_params: true # fsdp_cpu_ram_efficient_loading: true # fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP # fsdp_transformer_layer_cls_to_wrap: Qwen2DecoderLayer # fsdp_activation_checkpointing: true # fsdp_state_dict_type: SHARDED_STATE_DICT # Changed from FULL_STATE_DICT # fsdp_sharding_strategy: FULL_SHARD # fsdp_forward_prefetch: false # Added # fsdp_backward_prefetch: "BACKWARD_PRE" # Added # fsdp_backward_prefetch_limit: 1 # Added # fsdp_mixed_precision: BF16 # Added ``` </details><br>
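A minimal generation sketch using the ChatML template and the sampler values recommended in the card; Top-A has no `transformers` equivalent and is omitted, and `min_p` requires a recent `transformers` release:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Model-SafeTensors/EVA-Qwen2.5-32B-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# ChatML prompt format, per the card
messages = [{"role": "user", "content": "Write the opening paragraph of a noir story."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Recommended sampler values from the card (Temperature 1, Min-P 0.05, Rep. Pen. 1.03)
output = model.generate(input_ids, max_new_tokens=512, do_sample=True,
                        temperature=1.0, min_p=0.05, repetition_penalty=1.03)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```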
sniperfix/d45b4c88-f354-4dab-84a9-9645a165ef5a
sniperfix
"2025-04-14T01:58:27Z"
0
0
null
[ "region:us" ]
null
"2025-04-14T01:58:12Z"
Imcf1Y3FSatM/gpt_0.125B_global_step1000_openassistant
Imcf1Y3FSatM
"2024-03-18T21:51:30Z"
124
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-02-21T06:07:40Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
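Since the auto-generated card leaves the quick-start section empty, a minimal sketch with the `text-generation` pipeline (untested against this checkpoint):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Imcf1Y3FSatM/gpt_0.125B_global_step1000_openassistant",
)
print(generator("Hello, how are you?", max_new_tokens=40)[0]["generated_text"])
```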
vikasaeta/distilbert-base-uncased-finetuned-ner
vikasaeta
"2022-04-27T09:38:56Z"
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:few_nerd", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-04-24T12:23:12Z"
--- pipeline_tag: token-classification license: apache-2.0 tags: - generated_from_trainer datasets: - few_nerd metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: few_nerd type: few_nerd args: supervised metrics: - name: Precision type: precision value: 0.6424480067658478 - name: Recall type: recall value: 0.6854236732015421 - name: F1 type: f1 value: 0.6632404008334158 - name: Accuracy type: accuracy value: 0.9075199647113962 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the few_nerd dataset. It achieves the following results on the evaluation set: - Loss: 0.3136 - Precision: 0.6424 - Recall: 0.6854 - F1: 0.6632 - Accuracy: 0.9075 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.328 | 1.0 | 8236 | 0.3197 | 0.6274 | 0.6720 | 0.6489 | 0.9041 | | 0.2776 | 2.0 | 16472 | 0.3111 | 0.6433 | 0.6759 | 0.6592 | 0.9069 | | 0.241 | 3.0 | 24708 | 0.3136 | 0.6424 | 0.6854 | 0.6632 | 0.9075 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
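The card lists metrics but no usage snippet; a minimal inference sketch (the aggregation strategy is a choice, not mandated by the card):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="vikasaeta/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Barack Obama visited the Microsoft campus in Redmond."))
```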
nttx/a5ee2937-69a6-4426-996f-5aa17987c810
nttx
"2025-02-13T17:48:42Z"
0
0
peft
[ "peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:EleutherAI/pythia-70m-deduped", "base_model:adapter:EleutherAI/pythia-70m-deduped", "license:apache-2.0", "region:us" ]
null
"2025-02-13T17:41:27Z"
--- library_name: peft license: apache-2.0 base_model: EleutherAI/pythia-70m-deduped tags: - axolotl - generated_from_trainer model-index: - name: a5ee2937-69a6-4426-996f-5aa17987c810 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: EleutherAI/pythia-70m-deduped bf16: auto chat_template: llama3 dataloader_num_workers: 12 dataset_prepared_path: null datasets: - data_files: - 5a568bfc958f4385_train_data.json ds_type: json format: custom path: /workspace/input_data/5a568bfc958f4385_train_data.json type: field_input: orig_response field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 2 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 150 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: nttx/a5ee2937-69a6-4426-996f-5aa17987c810 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 900 micro_batch_size: 4 mlflow_experiment_name: /tmp/5a568bfc958f4385_train_data.json model_type: AutoModelForCausalLM num_epochs: 10 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 150 saves_per_epoch: null sequence_len: 512 special_tokens: pad_token: <|endoftext|> strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: e29108f4-d366-4632-bee1-91f4584ab380 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: e29108f4-d366-4632-bee1-91f4584ab380 warmup_steps: 50 weight_decay: 0.0 xformers_attention: null ``` </details><br> # a5ee2937-69a6-4426-996f-5aa17987c810 This model is a fine-tuned version of [EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 4.8074 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 50 - training_steps: 900 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0001 | 1 | 12.4252 | | 22.5635 | 0.0169 | 150 | 5.4802 | | 19.7155 | 0.0337 | 300 | 4.9590 | | 19.2023 | 0.0506 | 450 | 4.8739 | | 19.0622 | 0.0674 | 600 | 4.8344 | | 18.8558 | 0.0843 | 750 | 4.8139 | | 18.7662 | 0.1011 | 900 | 4.8074 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
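Only the LoRA adapter is published here, so inference means loading the pythia-70m base first and attaching the adapter. A minimal sketch, assuming the standard `peft` API (the prompt text is a placeholder):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "EleutherAI/pythia-70m-deduped"
base = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the LoRA weights from this repo on top of the base model.
model = PeftModel.from_pretrained(base, "nttx/a5ee2937-69a6-4426-996f-5aa17987c810")

inputs = tokenizer("Write a one-line greeting.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```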
Sammarieo/whisper-tiny-ta
Sammarieo
"2023-04-16T02:32:12Z"
75
0
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "hi", "dataset:mozilla-foundation/jamaican_patio", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2023-04-14T05:03:34Z"
--- language: - hi license: apache-2.0 tags: - hf-asr-leaderboard - generated_from_trainer datasets: - mozilla-foundation/jamaican_patio model-index: - name: jamaican_asr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # jamaican_asr This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Jamaican Patois dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - training_steps: 100 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.29.0.dev0 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
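No usage example is given. A minimal sketch, assuming the standard `transformers` ASR pipeline (`sample.wav` is a placeholder for a local recording; decoding from a file path requires ffmpeg):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Sammarieo/whisper-tiny-ta",
)

# Accepts a file path or a raw numpy array sampled at 16 kHz.
result = asr("sample.wav")
print(result["text"])
```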
LeoChiuu/all-MiniLM-L6-v2
LeoChiuu
"2024-09-09T18:15:27Z"
10
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:560", "loss:CoSENTLoss", "arxiv:1908.10084", "base_model:sentence-transformers/all-MiniLM-L6-v2", "base_model:finetune:sentence-transformers/all-MiniLM-L6-v2", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2024-09-03T20:04:26Z"
--- base_model: sentence-transformers/all-MiniLM-L6-v2 datasets: [] language: [] library_name: sentence-transformers metrics: - cosine_accuracy - cosine_accuracy_threshold - cosine_f1 - cosine_f1_threshold - cosine_precision - cosine_recall - cosine_ap - dot_accuracy - dot_accuracy_threshold - dot_f1 - dot_f1_threshold - dot_precision - dot_recall - dot_ap - manhattan_accuracy - manhattan_accuracy_threshold - manhattan_f1 - manhattan_f1_threshold - manhattan_precision - manhattan_recall - manhattan_ap - euclidean_accuracy - euclidean_accuracy_threshold - euclidean_f1 - euclidean_f1_threshold - euclidean_precision - euclidean_recall - euclidean_ap - max_accuracy - max_accuracy_threshold - max_f1 - max_f1_threshold - max_precision - max_recall - max_ap pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:560 - loss:CoSENTLoss widget: - source_sentence: Let's search inside sentences: - Stuffed animal - Let's look inside - What is worse? - source_sentence: I want a torch sentences: - What do you think of Spike - Actually I want a torch - Why candle? - source_sentence: Magic trace sentences: - A sword. - ' Why is he so tiny?' - 'The flower is changed into flower. ' - source_sentence: Did you use illusion? sentences: - Do you use illusion? - You are a cat? - It's Toby - source_sentence: Do you see your scarf in the watering can? sentences: - What is the Weeping Tree? - Are these your footprints? - Magic user model-index: - name: SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2 results: - task: type: binary-classification name: Binary Classification dataset: name: custom arc semantics data type: custom-arc-semantics-data metrics: - type: cosine_accuracy value: 0.9285714285714286 name: Cosine Accuracy - type: cosine_accuracy_threshold value: 0.42927420139312744 name: Cosine Accuracy Threshold - type: cosine_f1 value: 0.9425287356321839 name: Cosine F1 - type: cosine_f1_threshold value: 0.2269928753376007 name: Cosine F1 Threshold - type: cosine_precision value: 0.9111111111111111 name: Cosine Precision - type: cosine_recall value: 0.9761904761904762 name: Cosine Recall - type: cosine_ap value: 0.9720863676601571 name: Cosine Ap - type: dot_accuracy value: 0.9285714285714286 name: Dot Accuracy - type: dot_accuracy_threshold value: 0.42927438020706177 name: Dot Accuracy Threshold - type: dot_f1 value: 0.9425287356321839 name: Dot F1 - type: dot_f1_threshold value: 0.22699296474456787 name: Dot F1 Threshold - type: dot_precision value: 0.9111111111111111 name: Dot Precision - type: dot_recall value: 0.9761904761904762 name: Dot Recall - type: dot_ap value: 0.9720863676601571 name: Dot Ap - type: manhattan_accuracy value: 0.9285714285714286 name: Manhattan Accuracy - type: manhattan_accuracy_threshold value: 16.630834579467773 name: Manhattan Accuracy Threshold - type: manhattan_f1 value: 0.9431818181818182 name: Manhattan F1 - type: manhattan_f1_threshold value: 19.740108489990234 name: Manhattan F1 Threshold - type: manhattan_precision value: 0.9021739130434783 name: Manhattan Precision - type: manhattan_recall value: 0.9880952380952381 name: Manhattan Recall - type: manhattan_ap value: 0.9728353486982702 name: Manhattan Ap - type: euclidean_accuracy value: 0.9285714285714286 name: Euclidean Accuracy - type: euclidean_accuracy_threshold value: 1.068155288696289 name: Euclidean Accuracy Threshold - type: euclidean_f1 value: 0.9425287356321839 name: Euclidean F1 - type: 
euclidean_f1_threshold value: 1.2433418035507202 name: Euclidean F1 Threshold - type: euclidean_precision value: 0.9111111111111111 name: Euclidean Precision - type: euclidean_recall value: 0.9761904761904762 name: Euclidean Recall - type: euclidean_ap value: 0.9720863676601571 name: Euclidean Ap - type: max_accuracy value: 0.9285714285714286 name: Max Accuracy - type: max_accuracy_threshold value: 16.630834579467773 name: Max Accuracy Threshold - type: max_f1 value: 0.9431818181818182 name: Max F1 - type: max_f1_threshold value: 19.740108489990234 name: Max F1 Threshold - type: max_precision value: 0.9111111111111111 name: Max Precision - type: max_recall value: 0.9880952380952381 name: Max Recall - type: max_ap value: 0.9728353486982702 name: Max Ap --- # SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision 8b3219a92973c328a8e22fadcfa821b5dc75636a --> - **Maximum Sequence Length:** 256 tokens - **Output Dimensionality:** 384 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("LeoChiuu/all-MiniLM-L6-v2") # Run inference sentences = [ 'Do you see your scarf in the watering can?', 'Are these your footprints?', 'Magic user', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Binary Classification * Dataset: `custom-arc-semantics-data` * Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator) | Metric | Value | |:-----------------------------|:-----------| | cosine_accuracy | 0.9286 | | cosine_accuracy_threshold | 0.4293 | | cosine_f1 | 0.9425 | | cosine_f1_threshold | 0.227 | | cosine_precision | 0.9111 | | cosine_recall | 0.9762 | | cosine_ap | 0.9721 | | dot_accuracy | 0.9286 | | dot_accuracy_threshold | 0.4293 | | dot_f1 | 0.9425 | | dot_f1_threshold | 0.227 | | dot_precision | 0.9111 | | dot_recall | 0.9762 | | dot_ap | 0.9721 | | manhattan_accuracy | 0.9286 | | manhattan_accuracy_threshold | 16.6308 | | manhattan_f1 | 0.9432 | | manhattan_f1_threshold | 19.7401 | | manhattan_precision | 0.9022 | | manhattan_recall | 0.9881 | | manhattan_ap | 0.9728 | | euclidean_accuracy | 0.9286 | | euclidean_accuracy_threshold | 1.0682 | | euclidean_f1 | 0.9425 | | euclidean_f1_threshold | 1.2433 | | euclidean_precision | 0.9111 | | euclidean_recall | 0.9762 | | euclidean_ap | 0.9721 | | max_accuracy | 0.9286 | | max_accuracy_threshold | 16.6308 | | max_f1 | 0.9432 | | max_f1_threshold | 19.7401 | | max_precision | 0.9111 | | max_recall | 0.9881 | | **max_ap** | **0.9728** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 560 training samples * Columns: <code>text1</code>, <code>text2</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | text1 | text2 | label | |:--------|:--------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 3 tokens</li><li>mean: 7.2 tokens</li><li>max: 18 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 7.26 tokens</li><li>max: 18 tokens</li></ul> | <ul><li>0: ~36.07%</li><li>1: ~63.93%</li></ul> | * Samples: | text1 | text2 | label | |:-----------------------------------------------------|:--------------------------------------------------------------------------|:---------------| | <code>When it was dinner</code> | <code>Dinner time</code> | <code>1</code> | | <code>Did you cook chicken noodle last night?</code> | <code>Did you make chicken noodle for dinner?</code> | <code>1</code> | | <code>Someone who can change item</code> | <code>Someone who uses magic that turns something into something. 
</code> | <code>1</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` ### Evaluation Dataset #### Unnamed Dataset * Size: 140 evaluation samples * Columns: <code>text1</code>, <code>text2</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | text1 | text2 | label | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 3 tokens</li><li>mean: 6.99 tokens</li><li>max: 18 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 7.29 tokens</li><li>max: 18 tokens</li></ul> | <ul><li>0: ~40.00%</li><li>1: ~60.00%</li></ul> | * Samples: | text1 | text2 | label | |:-----------------------------------------|:-----------------------------------------|:---------------| | <code>Let's check inside</code> | <code>Let's search inside</code> | <code>1</code> | | <code>Sohpie, are you okay?</code> | <code>Sophie Are you pressured?</code> | <code>0</code> | | <code>This wine glass is related.</code> | <code>This sword looks important.</code> | <code>0</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `learning_rate`: 2e-05 - `num_train_epochs`: 13 - `warmup_ratio`: 0.1 - `fp16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 8 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 13 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 
'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `eval_use_gather_object`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | loss | custom-arc-semantics-data_max_ap | |:-----:|:----:|:-------------:|:------:|:--------------------------------:| | None | 0 | - | - | 0.9254 | | 1.0 | 70 | 2.9684 | 1.4087 | 0.9425 | | 2.0 | 140 | 1.4461 | 1.0942 | 0.9629 | | 3.0 | 210 | 0.6005 | 0.8398 | 0.9680 | | 4.0 | 280 | 0.3021 | 0.7577 | 0.9703 | | 5.0 | 350 | 0.2412 | 0.7216 | 0.9715 | | 6.0 | 420 | 0.1816 | 0.7538 | 0.9722 | | 7.0 | 490 | 0.1512 | 0.8049 | 0.9726 | | 8.0 | 560 | 0.1208 | 0.7602 | 0.9726 | | 9.0 | 630 | 0.0915 | 0.7286 | 0.9729 | | 10.0 | 700 | 0.0553 | 0.7072 | 0.9729 | | 11.0 | 770 | 0.0716 | 0.6984 | 0.9730 | | 12.0 | 840 | 0.0297 | 0.7063 | 0.9725 | | 13.0 | 910 | 0.0462 | 0.6997 | 0.9728 | ### Framework Versions - Python: 3.10.14 - Sentence Transformers: 3.0.1 - Transformers: 4.44.2 - PyTorch: 2.4.1+cu121 - Accelerate: 0.34.2 - Datasets: 2.20.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### CoSENTLoss ```bibtex @online{kexuefm-8847, title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT}, author={Su Jianlin}, year={2022}, month={Jan}, url={https://kexue.fm/archives/8847}, } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact 
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
LarryAIDraw/eden
LarryAIDraw
"2023-08-06T20:58:56Z"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2023-08-06T20:51:36Z"
--- license: creativeml-openrail-m --- https://civitai.com/models/123264/eden-honkai-impact-3rd-or-3-or-3rd
nhung03/5048b92e-4369-4158-938b-4347f8451cde
nhung03
"2025-01-21T11:50:31Z"
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:defog/sqlcoder-7b-2", "base_model:adapter:defog/sqlcoder-7b-2", "license:cc-by-sa-4.0", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-01-21T11:35:38Z"
--- library_name: peft license: cc-by-sa-4.0 base_model: defog/sqlcoder-7b-2 tags: - axolotl - generated_from_trainer model-index: - name: 5048b92e-4369-4158-938b-4347f8451cde results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: defog/sqlcoder-7b-2 bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - fdd56d09ce656747_train_data.json ds_type: json format: custom path: /workspace/input_data/fdd56d09ce656747_train_data.json type: field_instruction: INSTRUCTION field_output: RESPONSE format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: nhung03/5048b92e-4369-4158-938b-4347f8451cde hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/fdd56d09ce656747_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: fecef9ac-e0fb-4174-87a6-ec0f3fcd1777 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: fecef9ac-e0fb-4174-87a6-ec0f3fcd1777 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 5048b92e-4369-4158-938b-4347f8451cde This model is a fine-tuned version of [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.5537 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.5927 | 0.1960 | 200 | 0.5537 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
prxy5608/720d5173-eba3-471c-8734-1215e79596fa
prxy5608
"2025-01-20T11:40:48Z"
7
0
peft
[ "peft", "safetensors", "gemma2", "axolotl", "generated_from_trainer", "base_model:zake7749/gemma-2-2b-it-chinese-kyara-dpo", "base_model:adapter:zake7749/gemma-2-2b-it-chinese-kyara-dpo", "license:gemma", "region:us" ]
null
"2025-01-20T11:24:34Z"
--- library_name: peft license: gemma base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo tags: - axolotl - generated_from_trainer model-index: - name: 720d5173-eba3-471c-8734-1215e79596fa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - bd478039791159ff_train_data.json ds_type: json format: custom path: /workspace/input_data/bd478039791159ff_train_data.json type: field_instruction: da field_output: da_bornholm format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: prxy5608/720d5173-eba3-471c-8734-1215e79596fa hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 8 mlflow_experiment_name: /tmp/bd478039791159ff_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 33087eb8-0ea1-4b92-b197-a6b8d4b2bff8 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 33087eb8-0ea1-4b92-b197-a6b8d4b2bff8 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 720d5173-eba3-471c-8734-1215e79596fa This model is a fine-tuned version of [zake7749/gemma-2-2b-it-chinese-kyara-dpo](https://huggingface.co/zake7749/gemma-2-2b-it-chinese-kyara-dpo) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.6940 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 6.958 | 0.0051 | 1 | 10.9355 | | 5.8628 | 0.2558 | 50 | 3.4423 | | 2.7156 | 0.5115 | 100 | 2.1658 | | 2.3461 | 0.7673 | 150 | 1.7854 | | 1.6855 | 1.0230 | 200 | 1.6940 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
ibm-research/dromedary-65b-lora-delta-v0
ibm-research
"2023-05-22T00:34:07Z"
0
1
null
[ "license:gpl", "region:us" ]
null
"2023-05-19T20:57:24Z"
--- license: gpl inference: false --- # Dromedary Model Card **NOTE: This "delta model" cannot be used directly.** Users have to apply it on top of the original LLaMA weights to get actual Dromedary weights. See https://github.com/IBM/Dromedary#model-weights for instructions. ## Model details <div align="center"> <img src="https://raw.githubusercontent.com/IBM/Dromedary/main/assets/images/dromedary_logo.svg" alt="Dromedary Logo"/> </div> **Model type:** Dromedary is an open-source self-aligned language model trained with minimal human supervision. The base language model is LLaMA-65b, based on the transformer architecture. **Model date:** Dromedary was trained between April 2023 and May 2023, but its knowledge only goes up until Sept-2021. **Organizations developing the model:** The Dromedary team as a joint effort between CMU and IBM. **Paper or resources for more information:** https://mitibmdemos.draco.res.ibm.com/dromedary **License:** LLaMA's Non-commercial bespoke license **Where to send questions or comments about the model:** https://github.com/IBM/Dromedary/issues ## Intended use **Primary intended uses:** The primary use of Dromedary is research on the alignment of large language models. **Primary intended users:** The primary intended users of the model are researchers in artificial intelligence. ## Delta weights We use the following configuration for the LoRA weights: ``` --lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \ --lora_r=16 \ ``` ## Training dataset Fewer than 300 lines of human annotations (including < 200 seed prompts, 16 generic principles, and 5 exemplars for in-context learning). ## Evaluation dataset We evaluate Dromedary on TruthfulQA and HHH Eval, as well as Vicuna benchmark questions.
lesso15/b6cd86bb-c0c0-4c9e-bc06-9415594c07fe
lesso15
"2025-01-31T09:22:37Z"
8
0
peft
[ "peft", "safetensors", "opt", "axolotl", "generated_from_trainer", "base_model:facebook/opt-1.3b", "base_model:adapter:facebook/opt-1.3b", "license:other", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-01-31T09:04:51Z"
--- library_name: peft license: other base_model: facebook/opt-1.3b tags: - axolotl - generated_from_trainer model-index: - name: b6cd86bb-c0c0-4c9e-bc06-9415594c07fe results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: facebook/opt-1.3b bf16: auto chat_template: llama3 datasets: - data_files: - 63d50c91f49b8dbd_train_data.json ds_type: json format: custom path: /workspace/input_data/63d50c91f49b8dbd_train_data.json type: field_instruction: title_main field_output: texte format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: lesso15/b6cd86bb-c0c0-4c9e-bc06-9415594c07fe hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/63d50c91f49b8dbd_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 754165f7-3bff-4b3e-af84-28bcb557f094 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 754165f7-3bff-4b3e-af84-28bcb557f094 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # b6cd86bb-c0c0-4c9e-bc06-9415594c07fe This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8103 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 7.7806 | 0.1957 | 200 | 1.8103 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
lesso09/d87f04dd-10bd-40d1-81f1-f2319596b3dd
lesso09
"2025-01-23T17:01:59Z"
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM-360M", "base_model:adapter:unsloth/SmolLM-360M", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-01-23T16:55:55Z"
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM-360M tags: - axolotl - generated_from_trainer model-index: - name: d87f04dd-10bd-40d1-81f1-f2319596b3dd results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/SmolLM-360M bf16: true chat_template: llama3 datasets: - data_files: - 54ad7f4d032727f3_train_data.json ds_type: json format: custom path: /workspace/input_data/54ad7f4d032727f3_train_data.json type: field_input: substituted_context field_instruction: question field_output: original_context format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: 2 eval_max_new_tokens: 128 eval_steps: 5 eval_table_size: null flash_attention: false fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: lesso09/d87f04dd-10bd-40d1-81f1-f2319596b3dd hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 25 micro_batch_size: 2 mlflow_experiment_name: /tmp/54ad7f4d032727f3_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 10 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 8f27d88d-97fd-4d47-ad5b-f2aa1f25172e wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 8f27d88d-97fd-4d47-ad5b-f2aa1f25172e warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # d87f04dd-10bd-40d1-81f1-f2319596b3dd This model is a fine-tuned version of [unsloth/SmolLM-360M](https://huggingface.co/unsloth/SmolLM-360M) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.0006 | 1 | nan | | 0.0 | 0.0029 | 5 | nan | | 0.0 | 0.0057 | 10 | nan | | 0.0 | 0.0086 | 15 | nan | | 0.0 | 0.0115 | 20 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
mradermacher/LLMJudge-Qwen2.5-3B-Instruct-Full-GGUF
mradermacher
"2025-03-16T23:04:43Z"
0
0
transformers
[ "transformers", "gguf", "en", "base_model:TreLiam/LLMJudge-Qwen2.5-3B-Instruct-Full", "base_model:quantized:TreLiam/LLMJudge-Qwen2.5-3B-Instruct-Full", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-03-16T22:27:06Z"
--- base_model: TreLiam/LLMJudge-Qwen2.5-3B-Instruct-Full language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/TreLiam/LLMJudge-Qwen2.5-3B-Instruct-Full <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/LLMJudge-Qwen2.5-3B-Instruct-Full-GGUF/resolve/main/LLMJudge-Qwen2.5-3B-Instruct-Full.Q2_K.gguf) | Q2_K | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/LLMJudge-Qwen2.5-3B-Instruct-Full-GGUF/resolve/main/LLMJudge-Qwen2.5-3B-Instruct-Full.Q3_K_S.gguf) | Q3_K_S | 1.6 | | | [GGUF](https://huggingface.co/mradermacher/LLMJudge-Qwen2.5-3B-Instruct-Full-GGUF/resolve/main/LLMJudge-Qwen2.5-3B-Instruct-Full.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/LLMJudge-Qwen2.5-3B-Instruct-Full-GGUF/resolve/main/LLMJudge-Qwen2.5-3B-Instruct-Full.Q3_K_L.gguf) | Q3_K_L | 1.8 | | | [GGUF](https://huggingface.co/mradermacher/LLMJudge-Qwen2.5-3B-Instruct-Full-GGUF/resolve/main/LLMJudge-Qwen2.5-3B-Instruct-Full.IQ4_XS.gguf) | IQ4_XS | 1.9 | | | [GGUF](https://huggingface.co/mradermacher/LLMJudge-Qwen2.5-3B-Instruct-Full-GGUF/resolve/main/LLMJudge-Qwen2.5-3B-Instruct-Full.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LLMJudge-Qwen2.5-3B-Instruct-Full-GGUF/resolve/main/LLMJudge-Qwen2.5-3B-Instruct-Full.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LLMJudge-Qwen2.5-3B-Instruct-Full-GGUF/resolve/main/LLMJudge-Qwen2.5-3B-Instruct-Full.Q5_K_S.gguf) | Q5_K_S | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/LLMJudge-Qwen2.5-3B-Instruct-Full-GGUF/resolve/main/LLMJudge-Qwen2.5-3B-Instruct-Full.Q5_K_M.gguf) | Q5_K_M | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/LLMJudge-Qwen2.5-3B-Instruct-Full-GGUF/resolve/main/LLMJudge-Qwen2.5-3B-Instruct-Full.Q6_K.gguf) | Q6_K | 2.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/LLMJudge-Qwen2.5-3B-Instruct-Full-GGUF/resolve/main/LLMJudge-Qwen2.5-3B-Instruct-Full.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/LLMJudge-Qwen2.5-3B-Instruct-Full-GGUF/resolve/main/LLMJudge-Qwen2.5-3B-Instruct-Full.f16.gguf) | f16 | 6.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
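For readers who want a concrete starting point beyond the linked READMEs, a minimal sketch using the `llama-cpp-python` bindings (an assumption of this example, not something the card prescribes; `Llama.from_pretrained` needs `huggingface_hub` installed and downloads the chosen quant straight from this repo):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama.from_pretrained(
    repo_id="mradermacher/LLMJudge-Qwen2.5-3B-Instruct-Full-GGUF",
    filename="LLMJudge-Qwen2.5-3B-Instruct-Full.Q4_K_M.gguf",  # the "fast, recommended" quant
    n_ctx=4096,
)

out = llm("Rate the following answer for factual accuracy: ...", max_tokens=128)
print(out["choices"][0]["text"])
```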
gokulsrinivasagan/bert_tiny_lda_wnli
gokulsrinivasagan
"2024-12-04T21:08:30Z"
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:gokulsrinivasagan/bert_tiny_lda", "base_model:finetune:gokulsrinivasagan/bert_tiny_lda", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-12-04T21:07:55Z"
--- library_name: transformers language: - en base_model: gokulsrinivasagan/bert_tiny_lda tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: bert_tiny_lda_wnli results: - task: name: Text Classification type: text-classification dataset: name: GLUE WNLI type: glue args: wnli metrics: - name: Accuracy type: accuracy value: 0.4225352112676056 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_tiny_lda_wnli This model is a fine-tuned version of [gokulsrinivasagan/bert_tiny_lda](https://huggingface.co/gokulsrinivasagan/bert_tiny_lda) on the GLUE WNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.6951 - Accuracy: 0.4225 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7067 | 1.0 | 3 | 0.6987 | 0.5352 | | 0.6987 | 2.0 | 6 | 0.6951 | 0.4225 | | 0.691 | 3.0 | 9 | 0.7094 | 0.4507 | | 0.6956 | 4.0 | 12 | 0.7013 | 0.4366 | | 0.6931 | 5.0 | 15 | 0.7101 | 0.4225 | | 0.6977 | 6.0 | 18 | 0.7243 | 0.4085 | | 0.6907 | 7.0 | 21 | 0.7117 | 0.4085 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.2.1+cu118 - Datasets 2.17.0 - Tokenizers 0.20.3
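A minimal inference sketch, assuming the standard `transformers` text-classification pipeline; WNLI is a sentence-pair task, so premise and hypothesis are passed together (the labels will read LABEL_0/LABEL_1 unless id2label was customized at training time):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="gokulsrinivasagan/bert_tiny_lda_wnli",
)

# WNLI pairs a premise with a candidate entailment.
print(clf({
    "text": "The trophy doesn't fit in the suitcase because it is too big.",
    "text_pair": "The trophy is too big.",
}))
```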
usm3d/tools
usm3d
"2024-11-04T16:05:20Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-03-14T08:38:08Z"
--- license: apache-2.0 --- # HoHo Tools Tools and utilities for the [S23DR competition](https://huggingface.co/spaces/usm3d/S23DR) and [HoHo Dataset](https://huggingface.co/datasets/usm3d/usm-training-data) ## Installation ```bash # pip install over ssh pip install git+ssh://[email protected]/usm3d/tools.git # pip install over http pip install git+http://hf.co/usm3d/tools.git # editable git clone http://hf.co/usm3d/tools cd tools pip install -e . ```
alicegoesdown/bb09f5a8-4297-4098-bd5a-518fcd00976a
alicegoesdown
"2025-02-01T22:11:03Z"
237
0
peft
[ "peft", "safetensors", "axolotl", "generated_from_trainer", "base_model:huggyllama/llama-7b", "base_model:adapter:huggyllama/llama-7b", "license:other", "region:us" ]
null
"2025-02-01T20:42:06Z"
--- library_name: peft license: other base_model: huggyllama/llama-7b tags: - axolotl - generated_from_trainer model-index: - name: c4b201cf-0eeb-4380-a91f-cd6329614a81 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora bf16: auto chat_template: llama3 dataset_prepared_path: null debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 8 gradient_checkpointing: true gradient_clipping: 0.1 group_by_length: false hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-07 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: linear max_steps: 200 micro_batch_size: 128 mlflow_experiment_name: /tmp/aed51b8e2c089967_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 4096 special_tokens: pad_token: </PAD> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.25 wandb_entity: null wandb_mode: online wandb_name: 6a8f76dd-7262-490a-905c-7b83c0f56891 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 6a8f76dd-7262-490a-905c-7b83c0f56891 warmup_steps: 5 weight_decay: 0.1 xformers_attention: true ``` </details><br> ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 128 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
ALivshits/Llama3_8B_lambada_plus_ATIS_50-merged
ALivshits
"2024-07-23T11:17:58Z"
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-07-23T11:12:23Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
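The getting-started section above is left as a placeholder. A minimal sketch, assuming the standard `transformers` text-generation pipeline (bfloat16 and `device_map="auto"` are assumptions that require a GPU and the `accelerate` package; the ATIS-style prompt is a placeholder):

```python
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="ALivshits/Llama3_8B_lambada_plus_ATIS_50-merged",
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires accelerate
)

print(generator("Book a flight from Boston to Denver", max_new_tokens=64)[0]["generated_text"])
```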
roleplaiapp/Phi-4-ReasoningRP-Q2_K-GGUF
roleplaiapp
"2025-01-29T11:46:29Z"
7
0
transformers
[ "transformers", "gguf", "2-bit", "Q2_K", "llama-cpp", "phi", "reasoningrp", "text-generation", "endpoints_compatible", "region:us", "conversational" ]
text-generation
"2025-01-29T11:46:08Z"
--- library_name: transformers pipeline_tag: text-generation tags: - 2-bit - Q2_K - gguf - llama-cpp - phi - reasoningrp - text-generation --- # roleplaiapp/Phi-4-ReasoningRP-Q2_K-GGUF **Repo:** `roleplaiapp/Phi-4-ReasoningRP-Q2_K-GGUF` **Original Model:** `Phi-4-ReasoningRP` **Quantized File:** `Phi-4-ReasoningRP.Q2_K.gguf` **Quantization:** `GGUF` **Quantization Method:** `Q2_K` ## Overview This is a GGUF Q2_K quantized version of Phi-4-ReasoningRP ## Quantization By I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
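As a quick-start sketch (the context size and prompt below are illustrative, not taken from this repo), the quantized file can be loaded with `llama-cpp-python`:

```python
from llama_cpp import Llama

# Fetches the Q2_K file from this repo via huggingface_hub and loads it
llm = Llama.from_pretrained(
    repo_id="roleplaiapp/Phi-4-ReasoningRP-Q2_K-GGUF",
    filename="Phi-4-ReasoningRP.Q2_K.gguf",
    n_ctx=4096,  # illustrative context size; adjust to fit your RAM
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```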
huangjia/xlm-roberta-base-finetuned-panx-fr
huangjia
"2022-07-09T16:05:20Z"
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-07-09T16:00:27Z"
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-fr results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.fr metrics: - name: F1 type: f1 value: 0.8204272363150867 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2739 - F1: 0.8204 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 96 | 0.3708 | 0.7672 | | 0.506 | 2.0 | 192 | 0.2967 | 0.8130 | | 0.506 | 3.0 | 288 | 0.2739 | 0.8204 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.2 - Datasets 1.18.4 - Tokenizers 0.10.3
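A minimal inference sketch with the `transformers` pipeline; the example sentence is illustrative, and the entity labels follow the PAN-X scheme the model was fine-tuned on:

```python
from transformers import pipeline

# French NER with the fine-tuned checkpoint; aggregation merges word pieces into entities
ner = pipeline(
    "token-classification",
    model="huangjia/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",
)
print(ner("Emmanuel Macron a rencontré des représentants de l'ONU à Marseille."))
```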
jiaoqsh/mbart-large-50-finetuned-stocks-event-2
jiaoqsh
"2023-02-17T13:25:53Z"
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "summarization", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
"2023-02-17T13:18:08Z"
--- license: mit tags: - summarization - generated_from_trainer metrics: - rouge model-index: - name: mbart-large-50-finetuned-stocks-event-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-large-50-finetuned-stocks-event-2 This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1281 - Rouge1: 0.9005 - Rouge2: 0.8194 - Rougel: 0.9005 - Rougelsum: 0.9005 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | 3.9121 | 1.0 | 20 | 1.1223 | 0.1389 | 0.1111 | 0.1366 | 0.1377 | | 0.2649 | 2.0 | 40 | 0.1712 | 0.8218 | 0.6944 | 0.8241 | 0.8194 | | 0.0404 | 3.0 | 60 | 0.1892 | 0.9329 | 0.8611 | 0.9329 | 0.9329 | | 0.0176 | 4.0 | 80 | 0.1553 | 0.9236 | 0.8472 | 0.9236 | 0.9213 | | 0.0151 | 5.0 | 100 | 0.1848 | 0.8426 | 0.7454 | 0.8417 | 0.8426 | | 0.0117 | 6.0 | 120 | 0.1917 | 0.8727 | 0.7778 | 0.8727 | 0.8727 | | 0.0246 | 7.0 | 140 | 0.1366 | 0.9074 | 0.8333 | 0.9074 | 0.9074 | | 0.0018 | 8.0 | 160 | 0.1281 | 0.9005 | 0.8194 | 0.9005 | 0.9005 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
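Since the training data is not documented, the input domain below is an assumption (a short stock-event passage); a minimal summarization sketch:

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="jiaoqsh/mbart-large-50-finetuned-stocks-event-2",
)
text = (
    "Company X announced a share buyback programme and raised its "
    "full-year revenue guidance after a strong third quarter."
)
print(summarizer(text, max_length=32, min_length=4)[0]["summary_text"])
```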
mlx-community/DeepSeek-R1-Distill-Qwen-14B
mlx-community
"2025-02-26T18:03:01Z"
1,567
1
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "mlx", "conversational", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-01-20T17:22:46Z"
--- library_name: transformers tags: - mlx base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B --- # mlx-community/DeepSeek-R1-Distill-Qwen-14B The Model [mlx-community/DeepSeek-R1-Distill-Qwen-14B](https://huggingface.co/mlx-community/DeepSeek-R1-Distill-Qwen-14B) was converted to MLX format from [deepseek-ai/DeepSeek-R1-Distill-Qwen-14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) using mlx-lm version **0.21.1**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/DeepSeek-R1-Distill-Qwen-14B") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
Shio-Koube/Tiny_random_DSv3
Shio-Koube
"2025-04-09T15:24:07Z"
0
0
null
[ "safetensors", "deepseek_v3", "region:us" ]
null
"2025-04-09T15:23:53Z"
assskelad/paraphrase-multilingual-mpnet-base-v2_hh_cos_sim
assskelad
"2024-04-04T19:44:00Z"
9
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2024-04-04T19:41:54Z"
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # assskelad/paraphrase-multilingual-mpnet-base-v2_hh_cos_sim This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('assskelad/paraphrase-multilingual-mpnet-base-v2_hh_cos_sim') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('assskelad/paraphrase-multilingual-mpnet-base-v2_hh_cos_sim') model = AutoModel.from_pretrained('assskelad/paraphrase-multilingual-mpnet-base-v2_hh_cos_sim') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=assskelad/paraphrase-multilingual-mpnet-base-v2_hh_cos_sim) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 2732 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 5, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1366, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Nitral-Archive/Captain-Eris_Twighlight-Magnum-12B
Nitral-Archive
"2024-12-05T22:23:57Z"
7
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:anthracite-org/magnum-v4-12b", "base_model:finetune:anthracite-org/magnum-v4-12b", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-12-05T22:16:17Z"
--- base_model: - anthracite-org/magnum-v4-12b - Nitral-archive/Captain_Eris-Twighlight-12B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [anthracite-org/magnum-v4-12b](https://huggingface.co/anthracite-org/magnum-v4-12b) * [Nitral-archive/Captain_Eris-Twighlight-12B](https://huggingface.co/Nitral-archive/Captain_Eris-Twighlight-12B) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: Nitral-archive/Captain_Eris-Twighlight-12B layer_range: [0, 40] - model: anthracite-org/magnum-v4-12b layer_range: [0, 40] merge_method: slerp base_model: Nitral-archive/Captain_Eris-Twighlight-12B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
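For intuition, here is a conceptual sketch of SLERP applied to two flattened weight tensors (mergekit's real implementation additionally handles per-layer `t` schedules and other edge cases):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors."""
    a_f, b_f = a.flatten().float(), b.flatten().float()
    # Normalized copies are used only to measure the angle between the tensors
    a_n = a_f / (a_f.norm() + eps)
    b_n = b_f / (b_f.norm() + eps)
    omega = torch.acos(torch.clamp(torch.dot(a_n, b_n), -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:  # tensors nearly parallel: plain lerp is numerically safer
        mixed = (1.0 - t) * a_f + t * b_f
    else:
        mixed = (torch.sin((1.0 - t) * omega) / so) * a_f + (torch.sin(t * omega) / so) * b_f
    return mixed.reshape(a.shape).to(a.dtype)
```

In the configuration above, the `t` filters give the self-attention and MLP weights different per-layer interpolation factors, with everything else fixed at 0.5.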
Jonjew/WinonaRyder1980
Jonjew
"2025-03-08T18:46:35Z"
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:unknown", "region:us" ]
text-to-image
"2025-03-08T18:46:30Z"
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: >- <lora:Winona_Ryder_Flux:1.2> This is a beautiful photograph of a woman, Brown hair cascading over her shoulder. She has short, dark brown hair styled in a modern, slightly tousled cut that frames her face, hand on hip. Wearing a boatneck dress. Standing in a cafe. Looking at the viewer. Smile output: url: images/00016-1256450138.png base_model: black-forest-labs/FLUX.1-dev instance_prompt: null license: unknown --- # Winona Ryder 1980 <Gallery /> ## Model description FROM https://civitai.com/models/1112882/winona-ryder-198090s-flux?modelVersionId=1250554 Recommended strength: 1 ## Download model Weights for this model are available in Safetensors format. [Download](/Jonjew/WinonaRyder1980/tree/main) them in the Files & versions tab.
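A hedged usage sketch with diffusers; the base model is gated on the Hub, and the prompt and sampling settings below are illustrative rather than taken from this card:

```python
import torch
from diffusers import FluxPipeline

# FLUX.1-dev requires accepting its license on the Hugging Face Hub first
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Jonjew/WinonaRyder1980")

prompt = (
    "This is a beautiful photograph of a woman with short, dark brown hair, "
    "standing in a cafe, looking at the viewer, smiling"
)
image = pipe(prompt, num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("example.png")
```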
aleegis09/e8477d59-b771-4df5-b284-057facf254ad
aleegis09
"2025-01-15T22:12:48Z"
5
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/llama-3-8b-Instruct", "base_model:adapter:unsloth/llama-3-8b-Instruct", "license:llama3", "region:us" ]
null
"2025-01-15T21:42:02Z"
--- library_name: peft license: llama3 base_model: unsloth/llama-3-8b-Instruct tags: - axolotl - generated_from_trainer model-index: - name: e8477d59-b771-4df5-b284-057facf254ad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/llama-3-8b-Instruct bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - 3861d7dc3ff9bd06_train_data.json ds_type: json format: custom path: /workspace/input_data/3861d7dc3ff9bd06_train_data.json type: field_instruction: prompt field_output: chosen format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 2 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: aleegis09/e8477d59-b771-4df5-b284-057facf254ad hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 8 mlflow_experiment_name: /tmp/3861d7dc3ff9bd06_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_torch output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 463a9d2e-a817-4437-b149-c70ea2aa6427 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 463a9d2e-a817-4437-b149-c70ea2aa6427 warmup_steps: 20 weight_decay: 0.0 xformers_attention: null ``` </details><br> # e8477d59-b771-4df5-b284-057facf254ad This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct](https://huggingface.co/unsloth/llama-3-8b-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.0253 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 20 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.5232 | 0.0043 | 1 | 1.8380 | | 1.429 | 0.2130 | 50 | 1.3885 | | 0.8175 | 0.4260 | 100 | 1.1358 | | 0.4118 | 0.6390 | 150 | 1.0368 | | 1.0597 | 0.8520 | 200 | 1.0253 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
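A minimal sketch for attaching the trained LoRA adapter to its base model with PEFT (dtype and device settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/llama-3-8b-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "aleegis09/e8477d59-b771-4df5-b284-057facf254ad")
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b-Instruct")
```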
allknowingroger/Gemma2pass-42B
allknowingroger
"2024-09-10T07:21:34Z"
7
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "mergekit", "merge", "conversational", "base_model:byroneverson/gemma-2-27b-it-abliterated", "base_model:finetune:byroneverson/gemma-2-27b-it-abliterated", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-09-10T07:08:08Z"
--- base_model: - byroneverson/gemma-2-27b-it-abliterated library_name: transformers tags: - mergekit - merge license: apache-2.0 --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [byroneverson/gemma-2-27b-it-abliterated](https://huggingface.co/byroneverson/gemma-2-27b-it-abliterated) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: byroneverson/gemma-2-27b-it-abliterated layer_range: [0, 39] - sources: - model: byroneverson/gemma-2-27b-it-abliterated layer_range: [8, 39] merge_method: passthrough dtype: bfloat16 ```
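As a sanity check on the resulting depth — assuming mergekit's `layer_range` values are half-open `[start, end)` — the passthrough stack above duplicates layers 8 through 38:

```python
# Layer count implied by the passthrough configuration above
ranges = [(0, 39), (8, 39)]
total_layers = sum(end - start for start, end in ranges)
print(total_layers)  # 39 + 31 = 70 decoder layers in the merged model
```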
milanvelinovski/gemma-2-2B-it-function_calling-V0
milanvelinovski
"2025-03-04T17:53:23Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-2-2b-it", "base_model:finetune:google/gemma-2-2b-it", "endpoints_compatible", "region:us" ]
null
"2025-03-04T17:52:14Z"
--- base_model: google/gemma-2-2b-it library_name: transformers model_name: gemma-2-2B-it-function_calling-V0 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for gemma-2-2B-it-function_calling-V0 This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="milanvelinovski/gemma-2-2B-it-function_calling-V0", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.15.2 - Transformers: 4.47.0 - Pytorch: 2.5.1+cu121 - Datasets: 3.3.1 - Tokenizers: 0.21.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
ishan24/Sana_1600M_1024px_BF16_ControlNet_diffusers
ishan24
"2025-03-20T11:23:34Z"
25
0
diffusers
[ "diffusers", "safetensors", "SanaControlNetPipeline", "text-to-image", "arxiv:2410.10629", "license:mit", "region:us" ]
text-to-image
"2025-03-12T11:03:46Z"
--- tags: - SanaControlNetPipeline pipeline_tag: text-to-image license: mit --- <p align="center" style="border-radius: 10px"> <img src="https://raw.githubusercontent.com/NVlabs/Sana/refs/heads/main/asset/logo.png" width="35%" alt="logo"/> </p> <div style="display:flex;justify-content: center"> <a href="https://huggingface.co/collections/Efficient-Large-Model/sana-673efba2a57ed99843f11f9e"><img src="https://img.shields.io/static/v1?label=Demo&message=Huggingface&color=yellow"></a> &ensp; <a href="https://github.com/NVlabs/Sana"><img src="https://img.shields.io/static/v1?label=Code&message=Github&color=blue&logo=github"></a> &ensp; <a href="https://nvlabs.github.io/Sana/"><img src="https://img.shields.io/static/v1?label=Project&message=Github&color=blue&logo=github-pages"></a> &ensp; <a href="https://hanlab.mit.edu/projects/sana/"><img src="https://img.shields.io/static/v1?label=Page&message=MIT&color=darkred&logo=github-pages"></a> &ensp; <a href="https://arxiv.org/abs/2410.10629"><img src="https://img.shields.io/static/v1?label=Arxiv&message=Sana&color=red&logo=arxiv"></a> &ensp; <a href="https://nv-sana.mit.edu/"><img src="https://img.shields.io/static/v1?label=Demo&message=MIT&color=yellow"></a> &ensp; <a href="https://discord.gg/rde6eaE5Ta"><img src="https://img.shields.io/static/v1?label=Discuss&message=Discord&color=purple&logo=discord"></a> &ensp; </div> # Model card We introduce **Sana**, a text-to-image framework that can efficiently generate images up to 4096 × 4096 resolution. Sana can synthesize high-resolution, high-quality images with strong text-image alignment at a remarkably fast speed, deployable on a laptop GPU. Source code is available at https://github.com/NVlabs/Sana. ### 🧨 Diffusers ### 1. How to use `SanaControlNetPipeline` with `🧨diffusers` ```python # run `pip install git+https://github.com/huggingface/diffusers` before using Sana in diffusers import torch from diffusers import SanaControlNetModel, SanaControlNetPipeline from diffusers.utils import load_image controlnet = SanaControlNetModel.from_pretrained( "ishan24/Sana_1600M_1024px_BF16_ControlNet_diffusers", torch_dtype=torch.float16 ) pipe = SanaControlNetPipeline.from_pretrained( "Efficient-Large-Model/Sana_1600M_1024px_MultiLing_diffusers", variant="fp16", controlnet=controlnet, torch_dtype=torch.float16, ) pipe.to('cuda') pipe.vae.to(torch.bfloat16) pipe.text_encoder.to(torch.bfloat16) cond_image = load_image( "https://huggingface.co/ishan24/Sana_600M_1024px_ControlNet_diffusers/resolve/main/hed_example.png" ) prompt='a cat with a neon sign that says "Sana"' image = pipe( prompt, control_image=cond_image, ).images[0] image.save("sana.png") ```
jemal/Reinforce-CartPole-v1
jemal
"2024-01-28T05:12:14Z"
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2024-01-28T05:12:03Z"
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 498.30 +/- 5.10 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
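A minimal sketch of the kind of policy network used in this sort of custom REINFORCE implementation (the layer sizes and structure are assumptions in the style of the course, not read from this checkpoint):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Policy(nn.Module):
    """Small MLP policy for CartPole-v1 (4 observations, 2 discrete actions)."""
    def __init__(self, obs_dim=4, hidden=16, n_actions=2):
        super().__init__()
        self.fc1 = nn.Linear(obs_dim, hidden)
        self.fc2 = nn.Linear(hidden, n_actions)

    def forward(self, x):
        return F.softmax(self.fc2(F.relu(self.fc1(x))), dim=-1)

    def act(self, state):
        # Sample an action and keep its log-probability for the policy-gradient update
        probs = self.forward(torch.as_tensor(state, dtype=torch.float32).unsqueeze(0))
        dist = torch.distributions.Categorical(probs)
        action = dist.sample()
        return action.item(), dist.log_prob(action)
```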
Sivakumar/distilbert-base-uncased-finetuned-squad
Sivakumar
"2022-03-13T21:52:35Z"
11
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad_v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
"2022-03-13T17:08:45Z"
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad_v2 model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.4101 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2109 | 1.0 | 8235 | 1.2303 | | 0.9385 | 2.0 | 16470 | 1.2412 | | 0.7448 | 3.0 | 24705 | 1.4101 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
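A minimal extractive-QA sketch with the `transformers` pipeline; since the model was tuned on squad_v2, `handle_impossible_answer=True` lets it abstain on unanswerable questions (the question/context pair below is illustrative):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Sivakumar/distilbert-base-uncased-finetuned-squad",
)
result = qa(
    question="Which dataset was used for fine-tuning?",
    context="This DistilBERT checkpoint was fine-tuned on the SQuAD v2 dataset.",
    handle_impossible_answer=True,
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```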
Dejavu2021/llama2-qlora-finetunined-french
Dejavu2021
"2023-10-10T09:07:28Z"
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:TinyPixel/Llama-2-7B-bf16-sharded", "base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded", "region:us" ]
null
"2023-10-10T05:20:32Z"
--- library_name: peft base_model: TinyPixel/Llama-2-7B-bf16-sharded --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.0.dev0
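A loading sketch that mirrors the 4-bit quantization config listed above (generation settings and device handling are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Reproduce the bitsandbytes settings recorded in this card
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(
    "TinyPixel/Llama-2-7B-bf16-sharded",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "Dejavu2021/llama2-qlora-finetunined-french")
tokenizer = AutoTokenizer.from_pretrained("TinyPixel/Llama-2-7B-bf16-sharded")
```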
tensorblock/pygmalion-1.3b-GGUF
tensorblock
"2024-11-16T05:24:03Z"
80
0
null
[ "gguf", "text generation", "conversational", "TensorBlock", "GGUF", "en", "base_model:PygmalionAI/pygmalion-1.3b", "base_model:quantized:PygmalionAI/pygmalion-1.3b", "license:agpl-3.0", "region:us" ]
null
"2024-11-16T05:16:11Z"
--- license: agpl-3.0 language: - en tags: - text generation - conversational - TensorBlock - GGUF inference: false base_model: PygmalionAI/pygmalion-1.3b --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"> Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a> </p> </div> </div> ## PygmalionAI/pygmalion-1.3b - GGUF This repo contains GGUF format model files for [PygmalionAI/pygmalion-1.3b](https://huggingface.co/PygmalionAI/pygmalion-1.3b). The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d). <div style="text-align: left; margin: 20px 0;"> <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;"> Run them on the TensorBlock client using your local machine ↗ </a> </div> ## Prompt template ``` ``` ## Model file specification | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [pygmalion-1.3b-Q2_K.gguf](https://huggingface.co/tensorblock/pygmalion-1.3b-GGUF/blob/main/pygmalion-1.3b-Q2_K.gguf) | Q2_K | 0.531 GB | smallest, significant quality loss - not recommended for most purposes | | [pygmalion-1.3b-Q3_K_S.gguf](https://huggingface.co/tensorblock/pygmalion-1.3b-GGUF/blob/main/pygmalion-1.3b-Q3_K_S.gguf) | Q3_K_S | 0.607 GB | very small, high quality loss | | [pygmalion-1.3b-Q3_K_M.gguf](https://huggingface.co/tensorblock/pygmalion-1.3b-GGUF/blob/main/pygmalion-1.3b-Q3_K_M.gguf) | Q3_K_M | 0.709 GB | very small, high quality loss | | [pygmalion-1.3b-Q3_K_L.gguf](https://huggingface.co/tensorblock/pygmalion-1.3b-GGUF/blob/main/pygmalion-1.3b-Q3_K_L.gguf) | Q3_K_L | 0.766 GB | small, substantial quality loss | | [pygmalion-1.3b-Q4_0.gguf](https://huggingface.co/tensorblock/pygmalion-1.3b-GGUF/blob/main/pygmalion-1.3b-Q4_0.gguf) | Q4_0 | 0.770 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [pygmalion-1.3b-Q4_K_S.gguf](https://huggingface.co/tensorblock/pygmalion-1.3b-GGUF/blob/main/pygmalion-1.3b-Q4_K_S.gguf) | Q4_K_S | 0.775 GB | small, greater quality loss | | [pygmalion-1.3b-Q4_K_M.gguf](https://huggingface.co/tensorblock/pygmalion-1.3b-GGUF/blob/main/pygmalion-1.3b-Q4_K_M.gguf) | Q4_K_M | 0.853 GB | medium, balanced quality - recommended | | [pygmalion-1.3b-Q5_0.gguf](https://huggingface.co/tensorblock/pygmalion-1.3b-GGUF/blob/main/pygmalion-1.3b-Q5_0.gguf) | Q5_0 | 0.922 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [pygmalion-1.3b-Q5_K_S.gguf](https://huggingface.co/tensorblock/pygmalion-1.3b-GGUF/blob/main/pygmalion-1.3b-Q5_K_S.gguf) | Q5_K_S | 0.922 GB | large, low quality loss - recommended | | [pygmalion-1.3b-Q5_K_M.gguf](https://huggingface.co/tensorblock/pygmalion-1.3b-GGUF/blob/main/pygmalion-1.3b-Q5_K_M.gguf) | Q5_K_M | 0.984 GB | large, very low quality 
loss - recommended | | [pygmalion-1.3b-Q6_K.gguf](https://huggingface.co/tensorblock/pygmalion-1.3b-GGUF/blob/main/pygmalion-1.3b-Q6_K.gguf) | Q6_K | 1.084 GB | very large, extremely low quality loss | | [pygmalion-1.3b-Q8_0.gguf](https://huggingface.co/tensorblock/pygmalion-1.3b-GGUF/blob/main/pygmalion-1.3b-Q8_0.gguf) | Q8_0 | 1.403 GB | very large, extremely low quality loss - not recommended | ## Downloading instructions ### Command line First, install the Hugging Face CLI client: ```shell pip install -U "huggingface_hub[cli]" ``` Then, download an individual model file to a local directory: ```shell huggingface-cli download tensorblock/pygmalion-1.3b-GGUF --include "pygmalion-1.3b-Q2_K.gguf" --local-dir MY_LOCAL_DIR ``` If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try: ```shell huggingface-cli download tensorblock/pygmalion-1.3b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf' ```
HPLT/sft-fpft-cs-bloom-7b1
HPLT
"2025-04-04T10:33:33Z"
6
0
transformers
[ "transformers", "pytorch", "safetensors", "bloom", "text-generation", "generation", "question answering", "instruction tuning", "cs", "arxiv:2309.08958", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-04T19:57:37Z"
--- language: - cs tags: - generation - question answering - instruction tuning license: cc-by-nc-4.0 --- ### Model Description This HF repository contains base LLMs instruction-tuned (SFT) with full-parameter fine-tuning, which were then used to study whether monolingual or multilingual instruction tuning is more favourable. * [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main) * [Paper](https://arxiv.org/abs/2309.08958) #### Instruction tuning details * Base model: [bloom-7b1](https://huggingface.co/bloom-7b1) * Instruction tuning language: Czech * Training method: full-parameter fine-tuning. * Best checkpoint: best cross-entropy on a validation set, trained for 3 epochs. * Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data). #### Usage The model checkpoint should be loaded using the `transformers` library. Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/fpft) for inference and training instructions. #### Citation ``` @inproceedings{chen-etal-2024-monolingual, title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}", author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield", year="2024", booktitle = "Findings of the Association for Computational Linguistics: EACL 2024", } ```
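A minimal loading sketch; the plain Czech prompt below is an assumption — the linked repository documents the exact inference format used with the Alpaca-style training data:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HPLT/sft-fpft-cs-bloom-7b1")
model = AutoModelForCausalLM.from_pretrained("HPLT/sft-fpft-cs-bloom-7b1", device_map="auto")

prompt = "Vysvětli stručně, co je fotosyntéza."  # "Briefly explain photosynthesis."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```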
NasimB/guten-rarity-all-2p5k-log-rarity-all-sort
NasimB
"2023-07-15T11:10:36Z"
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-07-15T09:18:12Z"
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: guten-rarity-all-2p5k-log-rarity-all-sort results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # guten-rarity-all-2p5k-log-rarity-all-sort This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.3117 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.69 | 0.29 | 500 | 5.6272 | | 5.3349 | 0.59 | 1000 | 5.1982 | | 4.9818 | 0.88 | 1500 | 4.9441 | | 4.7024 | 1.17 | 2000 | 4.7940 | | 4.5531 | 1.47 | 2500 | 4.6766 | | 4.4445 | 1.76 | 3000 | 4.5629 | | 4.3064 | 2.05 | 3500 | 4.4888 | | 4.12 | 2.35 | 4000 | 4.4409 | | 4.0994 | 2.64 | 4500 | 4.3854 | | 4.0596 | 2.93 | 5000 | 4.3289 | | 3.8415 | 3.23 | 5500 | 4.3258 | | 3.7949 | 3.52 | 6000 | 4.2992 | | 3.7753 | 3.81 | 6500 | 4.2626 | | 3.6705 | 4.11 | 7000 | 4.2631 | | 3.5128 | 4.4 | 7500 | 4.2550 | | 3.5022 | 4.69 | 8000 | 4.2439 | | 3.4902 | 4.99 | 8500 | 4.2293 | | 3.3248 | 5.28 | 9000 | 4.2426 | | 3.3111 | 5.57 | 9500 | 4.2419 | | 3.3138 | 5.87 | 10000 | 4.2408 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
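A minimal generation sketch (the prompt and sampling settings are illustrative):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="NasimB/guten-rarity-all-2p5k-log-rarity-all-sort",
)
print(generator("Once upon a time", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```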
mradermacher/MT-Merge2-gemma-2-9B-GGUF
mradermacher
"2024-11-28T06:05:05Z"
8
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:zelk12/MT-Merge2-gemma-2-9B", "base_model:quantized:zelk12/MT-Merge2-gemma-2-9B", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-11-28T04:53:50Z"
--- base_model: zelk12/MT-Merge2-gemma-2-9B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/zelk12/MT-Merge2-gemma-2-9B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MT-Merge2-gemma-2-9B-GGUF/resolve/main/MT-Merge2-gemma-2-9B.Q2_K.gguf) | Q2_K | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/MT-Merge2-gemma-2-9B-GGUF/resolve/main/MT-Merge2-gemma-2-9B.Q3_K_S.gguf) | Q3_K_S | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/MT-Merge2-gemma-2-9B-GGUF/resolve/main/MT-Merge2-gemma-2-9B.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MT-Merge2-gemma-2-9B-GGUF/resolve/main/MT-Merge2-gemma-2-9B.Q3_K_L.gguf) | Q3_K_L | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/MT-Merge2-gemma-2-9B-GGUF/resolve/main/MT-Merge2-gemma-2-9B.IQ4_XS.gguf) | IQ4_XS | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/MT-Merge2-gemma-2-9B-GGUF/resolve/main/MT-Merge2-gemma-2-9B.Q4_0_4_4.gguf) | Q4_0_4_4 | 5.5 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/MT-Merge2-gemma-2-9B-GGUF/resolve/main/MT-Merge2-gemma-2-9B.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MT-Merge2-gemma-2-9B-GGUF/resolve/main/MT-Merge2-gemma-2-9B.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MT-Merge2-gemma-2-9B-GGUF/resolve/main/MT-Merge2-gemma-2-9B.Q5_K_S.gguf) | Q5_K_S | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/MT-Merge2-gemma-2-9B-GGUF/resolve/main/MT-Merge2-gemma-2-9B.Q5_K_M.gguf) | Q5_K_M | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/MT-Merge2-gemma-2-9B-GGUF/resolve/main/MT-Merge2-gemma-2-9B.Q6_K.gguf) | Q6_K | 7.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/MT-Merge2-gemma-2-9B-GGUF/resolve/main/MT-Merge2-gemma-2-9B.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/MT-Merge2-gemma-2-9B-GGUF/resolve/main/MT-Merge2-gemma-2-9B.f16.gguf) | f16 | 18.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Eli-Hindi-v0.1-GGUF
mradermacher
"2024-06-10T22:42:38Z"
29
0
transformers
[ "transformers", "gguf", "hindi", "bilingual", "hi", "en", "base_model:Neohumans-ai/Eli-Hindi-v0.1", "base_model:quantized:Neohumans-ai/Eli-Hindi-v0.1", "license:llama2", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-06-10T21:38:03Z"
--- base_model: Neohumans-ai/Eli-Hindi-v0.1 language: - hi - en library_name: transformers license: llama2 quantized_by: mradermacher tags: - hindi - bilingual --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Neohumans-ai/Eli-Hindi-v0.1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Eli-Hindi-v0.1-GGUF/resolve/main/Eli-Hindi-v0.1.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Eli-Hindi-v0.1-GGUF/resolve/main/Eli-Hindi-v0.1.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Eli-Hindi-v0.1-GGUF/resolve/main/Eli-Hindi-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Eli-Hindi-v0.1-GGUF/resolve/main/Eli-Hindi-v0.1.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Eli-Hindi-v0.1-GGUF/resolve/main/Eli-Hindi-v0.1.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Eli-Hindi-v0.1-GGUF/resolve/main/Eli-Hindi-v0.1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Eli-Hindi-v0.1-GGUF/resolve/main/Eli-Hindi-v0.1.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Eli-Hindi-v0.1-GGUF/resolve/main/Eli-Hindi-v0.1.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Eli-Hindi-v0.1-GGUF/resolve/main/Eli-Hindi-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Eli-Hindi-v0.1-GGUF/resolve/main/Eli-Hindi-v0.1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Eli-Hindi-v0.1-GGUF/resolve/main/Eli-Hindi-v0.1.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Eli-Hindi-v0.1-GGUF/resolve/main/Eli-Hindi-v0.1.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Eli-Hindi-v0.1-GGUF/resolve/main/Eli-Hindi-v0.1.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Eli-Hindi-v0.1-GGUF/resolve/main/Eli-Hindi-v0.1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Eli-Hindi-v0.1-GGUF/resolve/main/Eli-Hindi-v0.1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
purpleor/autotrain-V2-Proedge-New-Over-3
purpleor
"2024-05-04T03:58:56Z"
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "deberta-v2", "text-classification", "autotrain", "dataset:autotrain-V2-Proedge-New-Over-3/autotrain-data", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-05-03T21:21:01Z"
--- tags: - autotrain - text-classification widget: - text: "I love AutoTrain" datasets: - autotrain-V2-Proedge-New-Over-3/autotrain-data --- # Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics - loss: 0.06749869138002396 - f1: 0.9854706836363941 - precision: 0.9925711439594115 - recall: 0.9784710895110016 - auc: 0.9971283884434711 - accuracy: 0.9847107747804957
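A minimal inference sketch using the widget text from this card (the label names depend on the AutoTrain training data, which is not documented here):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="purpleor/autotrain-V2-Proedge-New-Over-3",
)
print(classifier("I love AutoTrain"))  # e.g. [{'label': ..., 'score': ...}]
```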
douglch/dqn-SpaceInvadersNoFrameskip-v4
douglch
"2023-05-24T07:43:39Z"
2
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2023-05-24T07:42:55Z"
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 662.00 +/- 198.67 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga douglch -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga douglch -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga douglch ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 10000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
th135/medalpaca-13b_gen_n900
th135
"2024-04-22T19:08:55Z"
4
0
peft
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-13b", "base_model:adapter:medalpaca/medalpaca-13b", "license:cc", "region:us" ]
null
"2024-04-22T19:08:53Z"
--- license: cc library_name: peft tags: - generated_from_trainer base_model: medalpaca/medalpaca-13b model-index: - name: medalpaca-13b_gen_n900 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-13b_gen_n900 This model is a fine-tuned version of [medalpaca/medalpaca-13b](https://huggingface.co/medalpaca/medalpaca-13b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
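A minimal sketch for attaching this adapter to its base model with PEFT (dtype and device settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "medalpaca/medalpaca-13b", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "th135/medalpaca-13b_gen_n900")
tokenizer = AutoTokenizer.from_pretrained("medalpaca/medalpaca-13b")
```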
TheBloke/Yi-34B-200K-DARE-merge-v5-GPTQ
TheBloke
"2023-12-19T21:30:45Z"
9
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "merge", "en", "base_model:brucethemoose/Yi-34B-200K-DARE-merge-v5", "base_model:quantized:brucethemoose/Yi-34B-200K-DARE-merge-v5", "license:other", "autotrain_compatible", "4-bit", "gptq", "region:us" ]
text-generation
"2023-12-19T18:28:15Z"
--- base_model: brucethemoose/Yi-34B-200K-DARE-merge-v5 inference: false language: - en library_name: transformers license: other license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE license_name: yi-license model_creator: brucethemoose model_name: Yi 34B 200K DARE Merge v5 model_type: yi pipeline_tag: text-generation prompt_template: 'SYSTEM: {system_message} USER: {prompt} ASSISTANT: ' quantized_by: TheBloke tags: - text-generation-inference - merge --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Yi 34B 200K DARE Merge v5 - GPTQ - Model creator: [brucethemoose](https://huggingface.co/brucethemoose) - Original model: [Yi 34B 200K DARE Merge v5](https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-merge-v5) <!-- description start --> # Description This repo contains GPTQ model files for [brucethemoose's Yi 34B 200K DARE Merge v5](https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-merge-v5). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Yi-34B-200K-DARE-merge-v5-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Yi-34B-200K-DARE-merge-v5-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Yi-34B-200K-DARE-merge-v5-GGUF) * [brucethemoose's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-merge-v5) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Orca-Vicuna ``` SYSTEM: {system_message} USER: {prompt} ASSISTANT: ``` <!-- prompt-template end --> <!-- README_GPTQ.md-compatible clients start --> ## Known compatible clients / servers GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models. These GPTQ models are known to work in the following inference servers/webuis. 
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) - [KoboldAI United](https://github.com/henk717/koboldai) - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) This may not be a complete list; if you know of others, please let me know! <!-- README_GPTQ.md-compatible clients end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Yi-34B-200K-DARE-merge-v5-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 18.60 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Yi-34B-200K-DARE-merge-v5-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 19.25 GB | Yes | 4-bit, with Act Order and group size 128g. Uses less VRAM than 32g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Yi-34B-200K-DARE-merge-v5-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 21.21 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. 
| | [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Yi-34B-200K-DARE-merge-v5-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 15.03 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Yi-34B-200K-DARE-merge-v5-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 35.34 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-3bit-32g-actorder_True](https://huggingface.co/TheBloke/Yi-34B-200K-DARE-merge-v5-GPTQ/tree/gptq-3bit-32g-actorder_True) | 3 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 16.90 GB | No | 3-bit, with group size 32g and act-order. Highest quality 3-bit option. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Yi-34B-200K-DARE-merge-v5-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 36.11 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/Yi-34B-200K-DARE-merge-v5-GPTQ` in the "Download model" box. To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Yi-34B-200K-DARE-merge-v5-GPTQ:gptq-4bit-128g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `Yi-34B-200K-DARE-merge-v5-GPTQ`: ```shell mkdir Yi-34B-200K-DARE-merge-v5-GPTQ huggingface-cli download TheBloke/Yi-34B-200K-DARE-merge-v5-GPTQ --local-dir Yi-34B-200K-DARE-merge-v5-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir Yi-34B-200K-DARE-merge-v5-GPTQ huggingface-cli download TheBloke/Yi-34B-200K-DARE-merge-v5-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir Yi-34B-200K-DARE-merge-v5-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. 
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir Yi-34B-200K-DARE-merge-v5-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Yi-34B-200K-DARE-merge-v5-GPTQ --local-dir Yi-34B-200K-DARE-merge-v5-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/Yi-34B-200K-DARE-merge-v5-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Yi-34B-200K-DARE-merge-v5-GPTQ`. - To download from a specific branch, enter for example `TheBloke/Yi-34B-200K-DARE-merge-v5-GPTQ:gptq-4bit-128g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Yi-34B-200K-DARE-merge-v5-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. 
The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/Yi-34B-200K-DARE-merge-v5-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" system_message = "You are a helpful assistant" prompt_template=f'''SYSTEM: {system_message} USER: {prompt} ASSISTANT: ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt_template, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## Python code example: inference from this GPTQ model ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install --upgrade transformers optimum # If using PyTorch 2.1 + CUDA 12.x: pip3 install --upgrade auto-gptq # or, if using PyTorch 2.1 + CUDA 11.x: pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ ``` If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.5.1 pip3 install . ``` ### Example Python code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/Yi-34B-200K-DARE-merge-v5-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-128g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Write a story about llamas" system_message = "You are a story writing assistant" prompt_template=f'''SYSTEM: {system_message} USER: {prompt} ASSISTANT: ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly. [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility. For a list of clients/servers, please see "Known compatible clients / servers", above. 
<!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donators! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: brucethemoose's Yi 34B 200K DARE Merge v5 [**Nous-Capybara-34B**](https://huggingface.co/NousResearch/Nous-Capybara-34B/), [**Tess-M-v1.4**](https://huggingface.co/migtissera/Tess-34B-v1.4), [**Airoboros-3_1-yi-34b-200k**](https://huggingface.co/bhenrym14/airoboros-3_1-yi-34b-200k), [**PlatYi-34B-200K-Q**](https://huggingface.co/kyujinpy/PlatYi-34B-200k-Q-FastChat), [**Pallas-0.4**](https://huggingface.co/Mihaiii/Pallas-0.4), [**Yi-34B-200K-AEZAKMI-v2**](https://huggingface.co/adamo1139/Yi-34B-200K-AEZAKMI-v2), and a tiny bit of [**SUS-Chat-34B**](https://huggingface.co/SUSTech/SUS-Chat-34B) merged with a new, experimental implementation of "dare ties" via mergekit. 
See: > [Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch](https://github.com/yule-BUAA/MergeLM) > https://github.com/cg123/mergekit/tree/dare *** ## Prompt template: Orca-Vicuna ``` SYSTEM: {system_message} USER: {prompt} ASSISTANT: ``` It might recognize ChatML, or maybe Llama-chat from Airoboros. Sometimes the model "spells out" the stop token as `</s>` like Capybara, so you may need to add `</s>` as an additional stopping condition. *** ## Running As this is a Yi model, try running a lower temperature with 0.05-0.1 MinP, a little repetition penalty, and no other samplers. Yi tends to run "hot" by default, and it really needs MinP to cull the huge vocabulary. 24GB GPUs can run Yi-34B-200K models at **45K-75K context** with exllamav2, and performant UIs like [exui](https://github.com/turboderp/exui). I go into more detail in this [post](https://old.reddit.com/r/LocalLLaMA/comments/1896igc/how_i_run_34b_models_at_75k_context_on_24gb_fast/). I recommend exl2 quantizations profiled on data similar to the desired task. The model is especially sensitive to the quantization data at low bpw. I've published my own fiction-oriented quantizations here: https://huggingface.co/collections/brucethemoose/most-recent-merge-65742644ca03b6c514afa204 To load this in full-context backends like transformers, you *must* change `max_position_embeddings` in config.json to a lower value than 200,000, otherwise you will OOM! *** ## Testing Notes Merged in mergekit with the following config, and the tokenizer from chargoddard's Yi-Llama: ``` models: - model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama # No parameters necessary for base model - model: /home/alpha/Storage/Models/Raw/migtissera_Tess-34B-v1.4 # Less weight than previous merge since Pallas is a finetune of Tess parameters: weight: 0.14 density: 0.62 - model: /home/alpha/FastModels/Mihaiii_Pallas-0.4 parameters: weight: 0.14 density: 0.62 - model: /home/alpha//Storage/Models/Raw/bhenrym14_airoboros-3_1-yi-34b-200k parameters: weight: 0.14 density: 0.52 - model: /home/alpha/Storage/Models/Raw/Nous-Capybara-34B parameters: weight: 0.22 density: 0.62 - model: /home/alpha/Storage/Models/Raw/kyujinpy_PlatYi-34B-200k-Q-FastChat parameters: weight: 0.14 density: 0.52 #- model: /home/alpha/Storage/Models/Raw/ehartford_dolphin-2.2-yi-34b-200k # Dolphin 200K seems to be broken according to multiple leaderboards and perplexity tests? # parameters: # weight: 0.15 # density: 0.6 - model: /home/alpha/Models/Raw/adamo1139_Yi-34B-200K-AEZAKMI-v2 parameters: weight: 0.14 density: 0.52 - model: /home/alpha/Models/Raw/SUSTech_SUS-Chat-34B/ # Very low density and low weight since it's a Yi 4K finetune, to try and preserve long context performance while "keeping" some of SUS parameters: weight: 0.08 density: 0.08 merge_method: dare_ties base_model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama parameters: int8_mask: true dtype: bfloat16 ``` Various densities were tested with perplexity tests and long context prompts. Relatively high densities seem to perform better, contrary to the findings of the Super Mario paper. This particular version is merged with more than the "recommended" max density of 0.5. It seems to result in even better perplexity, but I'm not sure if this translates to better output. Weights that add up to 1 seem to be optimal. Dare Ties also seems to produce better, lower-perplexity merges than a regular ties merge, task arithmetic, or a slerp merge. 
SUS Chat is not a 200K model, hence it was merged at a very low density to try and preserve Yi 200K's long context performance while still inheriting some of SUS's performance. Dolphin 200K was taken out of this merge because it seems to be performing poorly for a 34B Dolphin model, as if something went wrong during training. I chose not to include other finetunes because they aren't trained on the 200K base. If any other 200K finetunes pop up, let me know. *** ## Credits: https://github.com/cg123/mergekit/tree/dare https://huggingface.co/NousResearch/Nous-Capybara-34B/ https://huggingface.co/bhenrym14/airoboros-3_1-yi-34b-200k https://huggingface.co/migtissera/Tess-M-v1.4 https://huggingface.co/kyujinpy/PlatYi-34B-200k-Q-FastChat https://huggingface.co/adamo1139/Yi-34B-200K-AEZAKMI-v2 https://huggingface.co/Mihaiii/Pallas-0.4 https://huggingface.co/SUSTech/SUS-Chat-34B https://huggingface.co/chargoddard/Yi-34B-200K-Llama https://huggingface.co/01-ai/Yi-34B-200K
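As a concrete illustration of the sampling advice in the Running section above, here is a minimal sketch of loading the GPTQ quant from this repo with Transformers and sampling with MinP. It assumes a `transformers` version recent enough to support `min_p` in `generate()`; the prompt text and exact values are illustrative, not the author's settings:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Yi-34B-200K-DARE-merge-v5-GPTQ"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Orca-Vicuna template, as described above
prompt = "SYSTEM: You are a helpful assistant.\nUSER: Summarise the DARE ties merge method in two sentences.\nASSISTANT:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)

output = model.generate(
    input_ids,
    do_sample=True,
    min_p=0.05,               # 0.05-0.1 MinP, per the advice above
    temperature=0.8,          # lower than usual, since Yi runs "hot"
    repetition_penalty=1.05,  # "a little repetition penalty"
    max_new_tokens=256,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

As noted above, the model sometimes spells out the stop token, so adding the literal string `</s>` as an extra stopping condition in your client can help.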
TheBloke/Wizard-Vicuna-30B-Uncensored-GGUF
TheBloke
"2023-09-27T12:52:31Z"
6,666
49
transformers
[ "transformers", "gguf", "llama", "uncensored", "en", "dataset:ehartford/wizard_vicuna_70k_unfiltered", "base_model:cognitivecomputations/Wizard-Vicuna-30B-Uncensored", "base_model:quantized:cognitivecomputations/Wizard-Vicuna-30B-Uncensored", "license:other", "region:us" ]
null
"2023-09-19T23:02:39Z"
--- language: - en license: other tags: - uncensored datasets: - ehartford/wizard_vicuna_70k_unfiltered model_name: Wizard Vicuna 30B Uncensored base_model: ehartford/Wizard-Vicuna-30B-Uncensored inference: false model_creator: Eric Hartford model_type: llama prompt_template: 'A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user''s questions. USER: {prompt} ASSISTANT: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Wizard Vicuna 30B Uncensored - GGUF - Model creator: [Eric Hartford](https://huggingface.co/ehartford) - Original model: [Wizard Vicuna 30B Uncensored](https://huggingface.co/ehartford/Wizard-Vicuna-30B-Uncensored) <!-- description start --> ## Description This repo contains GGUF format model files for [Eric Hartford's Wizard-Vicuna-30B-Uncensored](https://huggingface.co/ehartford/Wizard-Vicuna-30B-Uncensored). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GGUF) * [Eric Hartford's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-fp16) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Vicuna ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
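As a sanity check, the quoted 2.5625 bpw for Q2_K is consistent with the block layout described above, provided we also assume a single fp16 scale per super-block (that last detail is not stated above, so treat it as an assumption):

```python
# Per super-block: 16 blocks x 16 weights of 2-bit quants,
# a 4-bit scale + 4-bit min per block, and one fp16 super-block
# scale (the fp16 scale is an assumption, not stated above).
weights = 16 * 16                  # 256 weights per super-block
quant_bits = weights * 2           # 512 bits of 2-bit quants
scale_min_bits = 16 * (4 + 4)      # 128 bits of block scales and mins
superblock_bits = 16               # one fp16 scale (assumed)
print((quant_bits + scale_min_bits + superblock_bits) / weights)  # 2.5625
```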
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [Wizard-Vicuna-30B-Uncensored.Q2_K.gguf](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GGUF/blob/main/Wizard-Vicuna-30B-Uncensored.Q2_K.gguf) | Q2_K | 2 | 13.50 GB| 16.00 GB | smallest, significant quality loss - not recommended for most purposes | | [Wizard-Vicuna-30B-Uncensored.Q3_K_S.gguf](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GGUF/blob/main/Wizard-Vicuna-30B-Uncensored.Q3_K_S.gguf) | Q3_K_S | 3 | 14.06 GB| 16.56 GB | very small, high quality loss | | [Wizard-Vicuna-30B-Uncensored.Q3_K_M.gguf](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GGUF/blob/main/Wizard-Vicuna-30B-Uncensored.Q3_K_M.gguf) | Q3_K_M | 3 | 15.76 GB| 18.26 GB | very small, high quality loss | | [Wizard-Vicuna-30B-Uncensored.Q3_K_L.gguf](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GGUF/blob/main/Wizard-Vicuna-30B-Uncensored.Q3_K_L.gguf) | Q3_K_L | 3 | 17.28 GB| 19.78 GB | small, substantial quality loss | | [Wizard-Vicuna-30B-Uncensored.Q4_0.gguf](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GGUF/blob/main/Wizard-Vicuna-30B-Uncensored.Q4_0.gguf) | Q4_0 | 4 | 18.36 GB| 20.86 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [Wizard-Vicuna-30B-Uncensored.Q4_K_S.gguf](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GGUF/blob/main/Wizard-Vicuna-30B-Uncensored.Q4_K_S.gguf) | Q4_K_S | 4 | 18.44 GB| 20.94 GB | small, greater quality loss | | [Wizard-Vicuna-30B-Uncensored.Q4_K_M.gguf](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GGUF/blob/main/Wizard-Vicuna-30B-Uncensored.Q4_K_M.gguf) | Q4_K_M | 4 | 19.62 GB| 22.12 GB | medium, balanced quality - recommended | | [Wizard-Vicuna-30B-Uncensored.Q5_0.gguf](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GGUF/blob/main/Wizard-Vicuna-30B-Uncensored.Q5_0.gguf) | Q5_0 | 5 | 22.40 GB| 24.90 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [Wizard-Vicuna-30B-Uncensored.Q5_K_S.gguf](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GGUF/blob/main/Wizard-Vicuna-30B-Uncensored.Q5_K_S.gguf) | Q5_K_S | 5 | 22.40 GB| 24.90 GB | large, low quality loss - recommended | | [Wizard-Vicuna-30B-Uncensored.Q5_K_M.gguf](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GGUF/blob/main/Wizard-Vicuna-30B-Uncensored.Q5_K_M.gguf) | Q5_K_M | 5 | 23.05 GB| 25.55 GB | large, very low quality loss - recommended | | [Wizard-Vicuna-30B-Uncensored.Q6_K.gguf](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GGUF/blob/main/Wizard-Vicuna-30B-Uncensored.Q6_K.gguf) | Q6_K | 6 | 26.69 GB| 29.19 GB | very large, extremely low quality loss | | [Wizard-Vicuna-30B-Uncensored.Q8_0.gguf](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GGUF/blob/main/Wizard-Vicuna-30B-Uncensored.Q8_0.gguf) | Q8_0 | 8 | 34.57 GB| 37.07 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! 
Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Wizard-Vicuna-30B-Uncensored-GGUF and below it, a specific filename to download, such as: Wizard-Vicuna-30B-Uncensored.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Wizard-Vicuna-30B-Uncensored-GGUF Wizard-Vicuna-30B-Uncensored.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Wizard-Vicuna-30B-Uncensored-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Wizard-Vicuna-30B-Uncensored-GGUF Wizard-Vicuna-30B-Uncensored.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m Wizard-Vicuna-30B-Uncensored.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). 
## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Wizard-Vicuna-30B-Uncensored-GGUF", model_file="Wizard-Vicuna-30B-Uncensored.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. 
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donators! <!-- footer end --> <!-- original-model-card start --> # Original model card: Eric Hartford's Wizard-Vicuna-30B-Uncensored <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Eric Hartford's Wizard-Vicuna-30B-Uncensored fp16 This is an fp16 model of [Eric Hartford's Wizard-Vicuna 30B](https://huggingface.co/ehartford/Wizard-Vicuna-30B-Uncensored). It is the result of converting Eric's original fp32 upload to fp16. ## Repositories available * [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GPTQ). * [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GGML). * [float16 HF format model for GPU inference and further conversions](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-fp16). <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman. Thank you to all my generous patrons and donators! 
<!-- footer end --> # Original model card This is [wizard-vicuna-13b](https://huggingface.co/junelee/wizard-vicuna-13b) trained with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately with, for example, an RLHF LoRA. Shout out to the open source AI/ML community, and everyone who helped me out. Note: An uncensored model has no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it. <!-- original-model-card end -->
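Supplementing the LangChain links in the GGUF card above, here is a minimal, hedged sketch of using one of the provided GGUF files through LangChain's llama-cpp-python integration. The import path assumes a recent `langchain-community` release, and the file name is one of the quants from the Provided files table:

```python
from langchain_community.llms import LlamaCpp

# Point model_path at a locally downloaded GGUF file from this repo
llm = LlamaCpp(
    model_path="Wizard-Vicuna-30B-Uncensored.Q4_K_M.gguf",
    n_gpu_layers=32,   # set to 0 for CPU-only inference
    n_ctx=2048,
    temperature=0.7,
)

# Vicuna prompt template, as documented in the card
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: What is the GGUF file format? ASSISTANT:"
)
print(llm.invoke(prompt))
```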
duyntnet/Mistral-7B-Holodeck-1-imatrix-GGUF
duyntnet
"2024-05-28T23:22:24Z"
31
0
transformers
[ "transformers", "gguf", "imatrix", "Mistral-7B-Holodeck-1", "text-generation", "en", "license:other", "region:us" ]
text-generation
"2024-05-28T20:27:40Z"
--- license: other language: - en pipeline_tag: text-generation inference: false tags: - transformers - gguf - imatrix - Mistral-7B-Holodeck-1 --- Quantizations of https://huggingface.co/KoboldAI/Mistral-7B-Holodeck-1 # From original readme Mistral 7B-Holodeck is a finetune created using Mistral's 7B model. ## Training data The training data contains around 3000 ebooks in various genres. Most parts of the dataset have been prepended using the following text: `[Genre: <genre1>, <genre2>]`
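To illustrate the genre-prefix convention described above at inference time, here is a minimal sketch using llama-cpp-python with one of these GGUF quants; the file name and genres are illustrative, not prescribed by the original readme:

```python
from llama_cpp import Llama

# Load a locally downloaded quant from this repo
llm = Llama(model_path="Mistral-7B-Holodeck-1.Q4_K_M.gguf", n_ctx=2048)

# Prepend the same genre tag format the training data used
prompt = "[Genre: science fiction, adventure]\nThe airlock hissed open, and"
out = llm(prompt, max_tokens=128, temperature=0.8)
print(out["choices"][0]["text"])
```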
AvaniSharma/mistral_7b_guanaco
AvaniSharma
"2024-03-02T07:32:13Z"
4
0
peft
[ "peft", "safetensors", "dataset:mlabonne/guanaco-llama2-1k", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
"2024-01-13T21:42:41Z"
--- library_name: peft base_model: mistralai/Mistral-7B-v0.1 license: apache-2.0 datasets: - mlabonne/guanaco-llama2-1k --- # Model Card for Model ID - This model is a finetuned version of the Mistral 7B model (mistralai/Mistral-7B-v0.1). - I have instruction-tuned Mistral 7B on the Guanaco Llama2 1k training dataset (mlabonne/guanaco-llama2-1k). ## Model Details I have used Kaggle's model feature to load the base model and then followed these steps to fine-tune the model: - First I created a quantization config with `BitsAndBytesConfig` to load the base model in 4-bit precision and reduce the memory footprint, passing it as the quantization config when loading the pretrained model - Thereafter I loaded the model using `AutoModelForCausalLM.from_pretrained` - We also get the tokenizer from the pretrained base model using `AutoTokenizer.from_pretrained` and adjust it for fp16. - LoRA Config - I used the PEFT technique QLoRA, creating a Low-Rank Adaptation (LoRA) config to add an adapter layer for fine-tuning. - Using LoRA we add low-rank weight matrices whose parameters are updated while the LLM's parameters stay frozen. After finetuning is over, we combine the weights of these low-rank matrices with the LLM's weights to obtain the new fine-tuned weights. This makes the fine-tuning process faster and more memory efficient - We train an SFT (Supervised Fine-Tuning) trainer using the LoRA parameters and the training hyperparameters listed under the *Training Hyperparameters* section to finetune the base model (a code sketch of this pipeline follows below) - **Developed by:** Avani Sharma - **Model type:** LLM - **Finetuned from model [optional]:** mistralai/Mistral-7B-v0.1 ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/Avani1994/NLP/blob/99dd33484bdf06261fd872f24b939977b55bdceb/Mistral_7B_4bit_QLoRA_Fine_tuning_Explained.ipynb #### Training Hyperparameters I used the following LoRA parameters: ``` lora_alpha=16, lora_dropout=0.1, r=64, ``` And the following hyperparameters for training ``` num_train_epochs=1 optim="paged_adamw_32bit", save_steps=25, logging_steps=25, per_device_train_batch_size=4 gradient_accumulation_steps=1 learning_rate=2e-4, weight_decay=0.001, lr_scheduler_type="constant", fp16=False, bf16=False, max_grad_norm=0.3, max_steps=-1, warmup_ratio=0.03, group_by_length=True, report_to="wandb" ``` ### Compute Infrastructure Kaggle #### Hardware Kaggle GPU T4x2 #### Software Kaggle Notebook ### Framework versions - PEFT 0.7.1
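A minimal sketch of the pipeline described above (4-bit load, LoRA adapter, SFT training), assembled from the hyperparameters listed in the card. This is not the author's exact notebook code: the `nf4` quant type is an assumption, and `SFTTrainer` argument names vary across `trl` versions (the form below matches trl releases contemporary with PEFT 0.7.1):

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer

base = "mistralai/Mistral-7B-v0.1"

# 4-bit quantization config to cut the memory footprint
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # assumption; not stated in the card
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token

# LoRA parameters from the card
peft_config = LoraConfig(lora_alpha=16, lora_dropout=0.1, r=64,
                         task_type="CAUSAL_LM")

# Training hyperparameters from the card
args = TrainingArguments(
    output_dir="./results", num_train_epochs=1,
    per_device_train_batch_size=4, gradient_accumulation_steps=1,
    optim="paged_adamw_32bit", learning_rate=2e-4, weight_decay=0.001,
    lr_scheduler_type="constant", warmup_ratio=0.03, max_grad_norm=0.3,
    fp16=False, bf16=False, max_steps=-1, group_by_length=True,
    logging_steps=25, save_steps=25, report_to="wandb",
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=load_dataset("mlabonne/guanaco-llama2-1k", split="train"),
    peft_config=peft_config,
    dataset_text_field="text",             # argument name in older trl releases
    tokenizer=tokenizer,
)
trainer.train()
```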
unsloth/Qwen2.5-14B-Instruct-1M-unsloth-bnb-4bit
unsloth
"2025-02-02T04:12:52Z"
372
2
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "unsloth", "qwen", "conversational", "en", "arxiv:2412.15115", "base_model:Qwen/Qwen2.5-14B-Instruct-1M", "base_model:quantized:Qwen/Qwen2.5-14B-Instruct-1M", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2025-01-27T08:38:25Z"
--- base_model: Qwen/Qwen2.5-14B-Instruct-1M language: - en library_name: transformers license: apache-2.0 tags: - unsloth - transformers - qwen - qwen2 --- # Finetune Llama 3.3, Qwen, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth! We have a Qwen 2.5 (all model sizes) [free Google Colab Tesla T4 notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2.4x faster | 58% less | | **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 2x faster | 60% less | | **Qwen2 VL (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2_VL_(7B)-Vision.ipynb) | 1.8x faster | 60% less | | **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) | 2x faster | 60% less | | **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb) | 2.4x faster | 58% less | | **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_3.5_Mini-Conversational.ipynb) | 2x faster | 50% less | | **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma2_(9B)-Alpaca.ipynb) | 2.4x faster | 58% less | | **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-Conversational.ipynb) | 2.2x faster | 62% less | [<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="200"/>](https://docs.unsloth.ai) - This [Llama 3.2 conversational notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_(7B)-Text_Completion.ipynb) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster. 
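As a concrete starting point for the notebooks linked above, here is a minimal, hedged sketch of loading this 4-bit checkpoint with Unsloth for QLoRA finetuning. The rank and target modules are common defaults from the notebooks, not requirements, and exact arguments may differ between Unsloth versions:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-14B-Instruct-1M-unsloth-bnb-4bit",
    max_seq_length=4096,   # raise for long-context finetuning if VRAM allows
    load_in_4bit=True,
)

# Attach LoRA adapters for 4-bit (QLoRA) finetuning
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```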
# Qwen2.5-14B-Instruct-1M <a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/> </a> ## Introduction Qwen2.5-1M is the long-context version of the Qwen2.5 series models, supporting a context length of up to 1M tokens. Compared to the Qwen2.5 128K version, Qwen2.5-1M demonstrates significantly improved performance in handling long-context tasks while maintaining its capability in short tasks. The model has the following features: - Type: Causal Language Models - Training Stage: Pretraining & Post-training - Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias - Number of Parameters: 14.7B - Number of Parameters (Non-Embedding): 13.1B - Number of Layers: 48 - Number of Attention Heads (GQA): 40 for Q and 8 for KV - Context Length: Full 1,010,000 tokens and generation 8192 tokens - We recommend deploying with our custom vLLM, which introduces sparse attention and length extrapolation methods to ensure efficiency and accuracy for long-context tasks. For specific guidance, refer to [this section](#processing-ultra-long-texts). - You can also use the previous framework that supports Qwen2.5 for inference, but accuracy degradation may occur for sequences exceeding 262,144 tokens. For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-1m/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/). ## Requirements The code for Qwen2.5 is in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`. With `transformers<4.37.0`, you will encounter the following error: ``` KeyError: 'qwen2' ``` ## Quickstart Here is a code snippet with `apply_chat_template` showing how to load the tokenizer and model, and how to generate content. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Qwen/Qwen2.5-14B-Instruct-1M" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = "Give me a short introduction to large language model." messages = [ {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) ``` ### Processing Ultra Long Texts To enhance processing accuracy and efficiency for long sequences, we have developed an advanced inference framework based on vLLM, incorporating sparse attention and length extrapolation. This approach significantly improves model generation performance for sequences exceeding 256K tokens and achieves a 3 to 7 times speedup for sequences up to 1M tokens. Here we provide step-by-step instructions for deploying the Qwen2.5-1M models with our framework. #### 1. System Preparation To achieve the best performance, we recommend using GPUs with Ampere or Hopper architecture, which support optimized kernels. 
Ensure your system meets the following requirements: - **CUDA Version**: 12.1 or 12.3 - **Python Version**: >=3.9 and <=3.12 **VRAM Requirements:** - For processing 1 million-token sequences: - **Qwen2.5-7B-Instruct-1M**: At least 120GB VRAM (total across GPUs). - **Qwen2.5-14B-Instruct-1M**: At least 320GB VRAM (total across GPUs). If your GPUs do not have sufficient VRAM, you can still use Qwen2.5-1M for shorter tasks. #### 2. Install Dependencies For now, you need to clone the vLLM repository from our custom branch and install it manually. We are working on getting our branch merged into the main vLLM project. ```bash git clone -b dev/dual-chunk-attn [email protected]:QwenLM/vllm.git cd vllm pip install -e . -v ``` #### 3. Launch vLLM vLLM supports offline inference or launching an OpenAI-like server. **Example of Offline Inference** ```python from transformers import AutoTokenizer from vllm import LLM, SamplingParams # Initialize the tokenizer tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-14B-Instruct-1M") # Pass the default decoding hyperparameters of Qwen2.5-14B-Instruct # max_tokens sets the maximum length for generation. sampling_params = SamplingParams(temperature=0.7, top_p=0.8, repetition_penalty=1.05, max_tokens=512) # Input the model name or path. See below for parameter explanation (after the example of the OpenAI-like server). llm = LLM(model="Qwen/Qwen2.5-14B-Instruct-1M", tensor_parallel_size=4, max_model_len=1010000, enable_chunked_prefill=True, max_num_batched_tokens=131072, enforce_eager=True, # quantization="fp8", # Enabling FP8 quantization for model weights can reduce memory usage. ) # Prepare your prompts prompt = "Tell me something about large language models." messages = [ {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) # generate outputs outputs = llm.generate([text], sampling_params) # Print the outputs. for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}") ``` **Example of OpenAI-like Server** ```bash vllm serve Qwen/Qwen2.5-14B-Instruct-1M \ --tensor-parallel-size 4 \ --max-model-len 1010000 \ --enable-chunked-prefill --max-num-batched-tokens 131072 \ --enforce-eager \ --max-num-seqs 1 # --quantization fp8 # Enabling FP8 quantization for model weights can reduce memory usage. ``` Then you can use curl or Python to interact with the deployed model. **Parameter Explanations:** - **`--tensor-parallel-size`** - Set to the number of GPUs you are using. Max 4 GPUs for the 7B model, and 8 GPUs for the 14B model. - **`--max-model-len`** - Defines the maximum input sequence length. Reduce this value if you encounter Out of Memory issues. - **`--max-num-batched-tokens`** - Sets the chunk size in Chunked Prefill. A smaller value reduces activation memory usage but may slow down inference. - 131072 is recommended for optimal performance. - **`--max-num-seqs`** - Limits concurrent sequences processed. You can also refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage of vLLM. #### Troubleshooting: 1. Encountering the error: "The model's max sequence length (xxxxx) is larger than the maximum number of tokens that can be stored in the KV cache." The VRAM reserved for the KV cache is insufficient. 
Consider reducing the ``max_model_len`` or increasing the ``tensor_parallel_size``. Alternatively, you can reduce ``max_num_batched_tokens``, although this may significantly slow down inference. 2. Encountering the error: "torch.OutOfMemoryError: CUDA out of memory." The VRAM reserved for activation weights is insufficient. You can try setting ``gpu_memory_utilization`` to 0.85 or lower, but be aware that this might reduce the VRAM available for the KV cache. 3. Encountering the error: "Input prompt (xxxxx tokens) + lookahead slots (0) is too long and exceeds the capacity of the block manager." The input is too lengthy. Consider using a shorter sequence or increasing the ``max_model_len``. ## Evaluation & Performance Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-1m/) and our [technical report](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-1M/Qwen2_5_1M_Technical_Report.pdf). ## Citation If you find our work helpful, feel free to give us a cite. ``` @misc{qwen2.5-1m, title = {Qwen2.5-1M: Deploy Your Own Qwen with Context Length up to 1M Tokens}, url = {https://qwenlm.github.io/blog/qwen2.5-1m/}, author = {Qwen Team}, month = {January}, year = {2025} } @article{qwen2.5, title={Qwen2.5 Technical Report}, author={An Yang and Baosong Yang and Beichen Zhang and Binyuan Hui and Bo Zheng and Bowen Yu and Chengyuan Li and Dayiheng Liu and Fei Huang and Haoran Wei and Huan Lin and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Yang and Jiaxi Yang and Jingren Zhou and Junyang Lin and Kai Dang and Keming Lu and Keqin Bao and Kexin Yang and Le Yu and Mei Li and Mingfeng Xue and Pei Zhang and Qin Zhu and Rui Men and Runji Lin and Tianhao Li and Tianyi Tang and Tingyu Xia and Xingzhang Ren and Xuancheng Ren and Yang Fan and Yang Su and Yichang Zhang and Yu Wan and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zihan Qiu}, journal={arXiv preprint arXiv:2412.15115}, year={2024} } ```
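For troubleshooting item 2 above, here is a minimal sketch of where `gpu_memory_utilization` fits into the offline-inference setup shown earlier; the 0.85 value is the suggestion from the troubleshooting note, and the other arguments simply repeat the earlier example:

```python
from vllm import LLM

llm = LLM(
    model="Qwen/Qwen2.5-14B-Instruct-1M",
    tensor_parallel_size=4,
    max_model_len=1010000,
    enable_chunked_prefill=True,
    max_num_batched_tokens=131072,
    enforce_eager=True,
    gpu_memory_utilization=0.85,  # lower than the 0.9 default, leaving headroom for activations
)
```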
Qwen/QwQ-32B-AWQ
Qwen
"2025-03-11T12:16:21Z"
56,773
76
null
[ "safetensors", "qwen2", "chat", "text-generation", "conversational", "en", "arxiv:2309.00071", "arxiv:2412.15115", "base_model:Qwen/QwQ-32B", "base_model:quantized:Qwen/QwQ-32B", "license:apache-2.0", "4-bit", "awq", "region:us" ]
text-generation
"2025-03-05T14:00:05Z"
--- license: apache-2.0 license_link: https://huggingface.co/Qwen/QWQ-32B-AWQ/blob/main/LICENSE language: - en pipeline_tag: text-generation base_model: Qwen/QwQ-32B tags: - chat --- # QwQ-32B-AWQ <a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/> </a> ## Introduction QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ, which is capable of thinking and reasoning, can achieve significantly enhanced performance in downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model, which is capable of achieving competitive performance against state-of-the-art reasoning models, e.g., DeepSeek-R1, o1-mini. <p align="center"> <img width="100%" src="figures/benchmark.jpg"> </p> **This repo contains the AWQ-quantized 4-bit QwQ 32B model**, which has the following features: - Type: Causal Language Models - Training Stage: Pretraining & Post-training (Supervised Finetuning and Reinforcement Learning) - Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias - Number of Parameters: 32.5B - Number of Parameters (Non-Embedding): 31.0B - Number of Layers: 64 - Number of Attention Heads (GQA): 40 for Q and 8 for KV - Context Length: Full 131,072 tokens - For prompts exceeding 8,192 tokens in length, you must enable YaRN as outlined in [this section](#usage-guidelines). - Quantization: AWQ 4-bit **Note:** For the best experience, please review the [usage guidelines](#usage-guidelines) before deploying QwQ models. You can try our [demo](https://huggingface.co/spaces/Qwen/QwQ-32B-Demo) or access QwQ models via [QwenChat](https://chat.qwen.ai). For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwq-32b/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/). ## Requirements QwQ is based on Qwen2.5, whose code has been integrated into the latest Hugging Face `transformers`. We advise you to use the latest version of `transformers`. With `transformers<4.37.0`, you will encounter the following error: ``` KeyError: 'qwen2' ``` Also check out our [AWQ documentation](https://qwen.readthedocs.io/en/latest/quantization/awq.html) for a detailed usage guide. ## Quickstart The following code snippet shows how to load the tokenizer and model with `apply_chat_template` and generate content. ```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/QwQ-32B-AWQ"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many r's are in the word \"strawberry\""
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
``` ### Usage Guidelines To achieve optimal performance, we recommend the following settings: 1. 
**Enforce Thoughtful Output**: Ensure the model starts with "\<think\>\n" to prevent generating empty thinking content, which can degrade output quality. If you use `apply_chat_template` and set `add_generation_prompt=True`, this is already automatically implemented, but it may cause the response to lack the \<think\> tag at the beginning. This is normal behavior. 2. **Sampling Parameters**: - Use Temperature=0.6, TopP=0.95, MinP=0 instead of Greedy decoding to avoid endless repetitions. - Use TopK between 20 and 40 to filter out rare token occurrences while maintaining the diversity of the generated output. - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may result in occasional language mixing and a slight decrease in performance. - A minimal sketch applying these settings is shown at the end of this card. 3. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. This feature is already implemented in `apply_chat_template`. 4. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking. - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt. - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g.,`\"answer\": \"C\"`." 5. **Handle Long Inputs**: For inputs exceeding 8,192 tokens, enable [YaRN](https://arxiv.org/abs/2309.00071) to improve the model's ability to capture long-sequence information effectively. For supported frameworks, you could add the following to `config.json` to enable YaRN: ```json { ..., "rope_scaling": { "factor": 4.0, "original_max_position_embeddings": 32768, "type": "yarn" } } ``` For deployment, we recommend using vLLM. Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM. Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**. We advise adding the `rope_scaling` configuration only when processing long contexts is required. ## Evaluation & Performance Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwq-32b/). For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html). ## Citation If you find our work helpful, feel free to cite us. 
``` @misc{qwq32b, title = {QwQ-32B: Embracing the Power of Reinforcement Learning}, url = {https://qwenlm.github.io/blog/qwq-32b/}, author = {Qwen Team}, month = {March}, year = {2025} } @article{qwen2.5, title={Qwen2.5 Technical Report}, author={An Yang and Baosong Yang and Beichen Zhang and Binyuan Hui and Bo Zheng and Bowen Yu and Chengyuan Li and Dayiheng Liu and Fei Huang and Haoran Wei and Huan Lin and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Yang and Jiaxi Yang and Jingren Zhou and Junyang Lin and Kai Dang and Keming Lu and Keqin Bao and Kexin Yang and Le Yu and Mei Li and Mingfeng Xue and Pei Zhang and Qin Zhu and Rui Men and Runji Lin and Tianhao Li and Tianyi Tang and Tingyu Xia and Xingzhang Ren and Xuancheng Ren and Yang Fan and Yang Su and Yichang Zhang and Yu Wan and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zihan Qiu}, journal={arXiv preprint arXiv:2412.15115}, year={2024} } ```
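As referenced in the usage guidelines above, here is a minimal sketch applying the recommended sampling settings, reusing the `model` and `model_inputs` variables from the Quickstart (note that `min_p` assumes a `transformers` release recent enough to support it):

```python
# Sampled decoding with the recommended QwQ settings; greedy decoding
# is discouraged because it can lead to endless repetition.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=30,   # any value in the recommended 20-40 range
    min_p=0.0,
)
```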
zstang/distilbert-base-uncased-finetuned-imdb
zstang
"2025-01-06T04:30:38Z"
367
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "fill-mask", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2025-01-06T04:05:41Z"
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4878 - Model Preparation Time: 0.0019 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | |:-------------:|:-----:|:----:|:---------------:|:----------------------:| | 2.6834 | 1.0 | 157 | 2.4964 | 0.0019 | | 2.5833 | 2.0 | 314 | 2.4492 | 0.0019 | | 2.5272 | 3.0 | 471 | 2.4812 | 0.0019 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
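A minimal inference sketch for this checkpoint, assuming the standard `transformers` pipeline API (the masked-LM objective means it fills in `[MASK]` tokens; the example sentence is only an illustration):

```python
from transformers import pipeline

# Load the fine-tuned masked language model from the Hub.
fill_mask = pipeline("fill-mask", model="zstang/distilbert-base-uncased-finetuned-imdb")

# DistilBERT uses [MASK] as its mask token.
for prediction in fill_mask("This movie was an absolute [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```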
redutskaya/Olya-la
redutskaya
"2023-09-25T14:43:19Z"
0
0
null
[ "art", "text-generation-inference", "license:openrail", "region:us" ]
null
"2023-09-25T14:39:54Z"
--- license: openrail tags: - art - text-generation-inference ---
NasimB/children-rarity-all-guten-log-rarity-all
NasimB
"2023-07-16T04:21:14Z"
9
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-07-16T02:19:49Z"
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: children-rarity-all-guten-log-rarity-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # children-rarity-all-guten-log-rarity-all This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.3116 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.7036 | 0.29 | 500 | 5.6365 | | 5.348 | 0.58 | 1000 | 5.2064 | | 4.99 | 0.87 | 1500 | 4.9589 | | 4.7208 | 1.16 | 2000 | 4.8071 | | 4.5602 | 1.46 | 2500 | 4.6761 | | 4.4513 | 1.75 | 3000 | 4.5690 | | 4.3332 | 2.04 | 3500 | 4.4907 | | 4.1308 | 2.33 | 4000 | 4.4479 | | 4.1002 | 2.62 | 4500 | 4.3912 | | 4.0711 | 2.91 | 5000 | 4.3370 | | 3.8621 | 3.2 | 5500 | 4.3334 | | 3.803 | 3.49 | 6000 | 4.3002 | | 3.7865 | 3.79 | 6500 | 4.2683 | | 3.6992 | 4.08 | 7000 | 4.2633 | | 3.5158 | 4.37 | 7500 | 4.2591 | | 3.5163 | 4.66 | 8000 | 4.2433 | | 3.501 | 4.95 | 8500 | 4.2300 | | 3.3525 | 5.24 | 9000 | 4.2437 | | 3.3213 | 5.53 | 9500 | 4.2424 | | 3.3235 | 5.82 | 10000 | 4.2416 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
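A minimal generation sketch for this checkpoint, assuming the standard `transformers` pipeline API (the prompt is only an illustration):

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 model from the Hub.
generator = pipeline("text-generation", model="NasimB/children-rarity-all-guten-log-rarity-all")

output = generator("Once upon a time", max_new_tokens=50, do_sample=True)
print(output[0]["generated_text"])
```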
sergioalves/df534303-069b-4a00-9398-da4e0f50192a
sergioalves
"2025-01-21T05:49:09Z"
8
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:adapter:unsloth/Qwen2.5-0.5B-Instruct", "license:apache-2.0", "region:us" ]
null
"2025-01-21T05:35:06Z"
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-0.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: df534303-069b-4a00-9398-da4e0f50192a results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-0.5B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 54a6ac8a9aeace5b_train_data.json ds_type: json format: custom path: /workspace/input_data/54a6ac8a9aeace5b_train_data.json type: field_input: paper_title field_instruction: invitation field_output: content format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device: cuda early_stopping_patience: 1 eval_max_new_tokens: 128 eval_steps: 5 eval_table_size: null evals_per_epoch: null flash_attention: false fp16: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: false hub_model_id: sergioalves/df534303-069b-4a00-9398-da4e0f50192a hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 3 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_memory: 0: 75GiB max_steps: 30 micro_batch_size: 2 mlflow_experiment_name: /tmp/54a6ac8a9aeace5b_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_hf output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 10 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: true trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: ad1c4a22-2c67-4534-befc-5d9f6cf39943 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: ad1c4a22-2c67-4534-befc-5d9f6cf39943 warmup_steps: 10 weight_decay: 0.01 xformers_attention: true ``` </details><br> # df534303-069b-4a00-9398-da4e0f50192a This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0001 | 1 | nan | | 0.0 | 0.0004 | 5 | nan | | 0.0 | 0.0009 | 10 | nan | | 0.0 | 0.0013 | 15 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
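A minimal loading sketch for this adapter, assuming the standard `peft` and `transformers` APIs (note that the training run above logged a loss of `nan`, so generation quality is not guaranteed):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter from this repository.
base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2.5-0.5B-Instruct")
model = PeftModel.from_pretrained(base, "sergioalves/df534303-069b-4a00-9398-da4e0f50192a")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-0.5B-Instruct")
```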
Romain-XV/9ec9e464-cc65-4e4d-a3e7-ddc402bb902a
Romain-XV
"2025-02-07T06:07:47Z"
7
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:HuggingFaceM4/tiny-random-LlamaForCausalLM", "base_model:adapter:HuggingFaceM4/tiny-random-LlamaForCausalLM", "region:us" ]
null
"2025-02-07T06:07:10Z"
--- library_name: peft base_model: HuggingFaceM4/tiny-random-LlamaForCausalLM tags: - axolotl - generated_from_trainer model-index: - name: 9ec9e464-cc65-4e4d-a3e7-ddc402bb902a results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: HuggingFaceM4/tiny-random-LlamaForCausalLM bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 8ff2d75d54d3261c_train_data.json ds_type: json format: custom path: /workspace/input_data/8ff2d75d54d3261c_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: 2 eval_max_new_tokens: 128 eval_steps: 100 eval_table_size: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 16 gradient_checkpointing: true group_by_length: false hub_model_id: Romain-XV/9ec9e464-cc65-4e4d-a3e7-ddc402bb902a hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_best_model_at_end: true load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lora_target_modules: - q_proj - k_proj - v_proj lr_scheduler: cosine max_steps: 38 micro_batch_size: 4 mlflow_experiment_name: /tmp/8ff2d75d54d3261c_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 100 sequence_len: 2048 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: e8ab85b2-902c-4d1d-8404-72583206523f wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: e8ab85b2-902c-4d1d-8404-72583206523f warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 9ec9e464-cc65-4e4d-a3e7-ddc402bb902a This model is a fine-tuned version of [HuggingFaceM4/tiny-random-LlamaForCausalLM](https://huggingface.co/HuggingFaceM4/tiny-random-LlamaForCausalLM) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 10.3784 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 38 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 10.376 | 0.0027 | 1 | 10.3784 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
hus960/Mini-Mixtral-v0.2-Q4_K_M-GGUF
hus960
"2024-04-27T23:23:46Z"
4
0
null
[ "gguf", "moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "unsloth/mistral-7b-v0.2", "mistralai/Mistral-7B-Instruct-v0.2", "llama-cpp", "gguf-my-repo", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:merge:mistralai/Mistral-7B-Instruct-v0.2", "base_model:unsloth/mistral-7b-v0.2", "base_model:merge:unsloth/mistral-7b-v0.2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-04-27T20:40:39Z"
--- license: apache-2.0 tags: - moe - frankenmoe - merge - mergekit - lazymergekit - unsloth/mistral-7b-v0.2 - mistralai/Mistral-7B-Instruct-v0.2 - llama-cpp - gguf-my-repo base_model: - unsloth/mistral-7b-v0.2 - mistralai/Mistral-7B-Instruct-v0.2 --- # hus960/Mini-Mixtral-v0.2-Q4_K_M-GGUF This model was converted to GGUF format from [`NeuralNovel/Mini-Mixtral-v0.2`](https://huggingface.co/NeuralNovel/Mini-Mixtral-v0.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/NeuralNovel/Mini-Mixtral-v0.2) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo hus960/Mini-Mixtral-v0.2-Q4_K_M-GGUF --model mini-mixtral-v0.2.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo hus960/Mini-Mixtral-v0.2-Q4_K_M-GGUF --model mini-mixtral-v0.2.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mini-mixtral-v0.2.Q4_K_M.gguf -n 128 ```
RichardErkhov/MrezaPRZ_-_CodeLlama-7B-postgres-expert-4bits
RichardErkhov
"2025-02-26T05:03:41Z"
0
0
null
[ "safetensors", "llama", "arxiv:1910.09700", "4-bit", "bitsandbytes", "region:us" ]
null
"2025-02-26T05:01:09Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) CodeLlama-7B-postgres-expert - bnb 4bits - Model creator: https://huggingface.co/MrezaPRZ/ - Original model: https://huggingface.co/MrezaPRZ/CodeLlama-7B-postgres-expert/ Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. 
--> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
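A minimal loading sketch for this quantized checkpoint, not part of the original card: since the weights were serialized with bitsandbytes 4-bit, `transformers` with `bitsandbytes` installed should be able to load them directly from the stored quantization config:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/MrezaPRZ_-_CodeLlama-7B-postgres-expert-4bits"

# The 4-bit quantization config is stored with the checkpoint,
# so no extra quantization arguments are needed here.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)
```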
levovix/ykgrywbg
levovix
"2023-02-25T09:19:08Z"
31
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-02-24T12:18:59Z"
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### ykgrywbg Dreambooth model trained by levovix with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Dreambooth `YanKaGor - You Will Be Gone` from Anything-v3.0 Concept name - `ykgrywbg` Sample pictures of this concept: ![0](https://huggingface.co/levovix/ykgrywbg/resolve/main/sample_images/12.png) ![1](https://huggingface.co/levovix/ykgrywbg/resolve/main/sample_images/7.png) ![2](https://huggingface.co/levovix/ykgrywbg/resolve/main/sample_images/11.png) ![3](https://huggingface.co/levovix/ykgrywbg/resolve/main/sample_images/17.png) ![4](https://huggingface.co/levovix/ykgrywbg/resolve/main/sample_images/9.png) ![5](https://huggingface.co/levovix/ykgrywbg/resolve/main/sample_images/15.png)
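A minimal generation sketch for this concept, assuming the standard `diffusers` API (the prompt is only an illustration; include the concept name `ykgrywbg` to trigger it):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth-trained pipeline from the Hub.
pipe = StableDiffusionPipeline.from_pretrained("levovix/ykgrywbg", torch_dtype=torch.float16).to("cuda")

image = pipe("a portrait of ykgrywbg, highly detailed").images[0]
image.save("ykgrywbg.png")
```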
owanr/SChem5Labels-roberta-base-inter-shuffle-human_annots_alpha0.5_whole_1e-05
owanr
"2023-12-16T19:47:28Z"
0
0
null
[ "pytorch", "safetensors", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "region:us" ]
null
"2023-12-16T19:47:11Z"
--- license: mit base_model: roberta-base tags: - generated_from_trainer model-index: - name: SChem5Labels-roberta-base-inter-shuffle-human_annots_alpha0.5_whole_1e-05 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SChem5Labels-roberta-base-inter-shuffle-human_annots_alpha0.5_whole_1e-05 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.8905 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.0228 | 1.0 | 3164 | 2.9856 | | 2.743 | 2.0 | 6328 | 2.8905 | | 2.6452 | 3.0 | 9492 | 2.8905 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
pengguilan/Llama2_7b_lora_M0
pengguilan
"2025-04-13T12:45:20Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-04-13T12:44:21Z"
LarryAIDraw/arkAngelina_XL-Pony_LoRA-C3Lier_16-16-8-8_AdamW_Un3e-4_Te1_5e-4_10batch
LarryAIDraw
"2024-06-15T16:34:40Z"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2024-06-15T16:33:10Z"
--- license: creativeml-openrail-m --- https://civitai.com/models/500945/request-angelina-distinguished-visitor-arknights-sdxl-pony-diffusion
leenag/Mal_ASR_Whisper_small_imasc_1000
leenag
"2023-11-09T04:26:00Z"
4
3
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2023-11-08T12:18:25Z"
--- license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer metrics: - wer model-index: - name: Mal_ASR_Whisper_small_imasc_1000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mal_ASR_Whisper_small_imasc_1000 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0642 - Wer: 52.2853 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 2000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3098 | 0.74 | 200 | 0.2613 | 200.6810 | | 0.1009 | 1.48 | 400 | 0.0988 | 54.5952 | | 0.0559 | 2.22 | 600 | 0.0722 | 44.6184 | | 0.0518 | 2.96 | 800 | 0.0608 | 39.1631 | | 0.0285 | 3.7 | 1000 | 0.0573 | 46.0858 | | 0.0166 | 4.44 | 1200 | 0.0567 | 46.7036 | | 0.0082 | 5.19 | 1400 | 0.0589 | 50.9513 | | 0.0075 | 5.93 | 1600 | 0.0590 | 65.6252 | | 0.0031 | 6.67 | 1800 | 0.0629 | 57.2913 | | 0.0018 | 7.41 | 2000 | 0.0642 | 52.2853 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.14.0
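A minimal transcription sketch for this checkpoint, assuming the standard `transformers` pipeline API (the audio file path is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned Whisper model from the Hub.
asr = pipeline("automatic-speech-recognition", model="leenag/Mal_ASR_Whisper_small_imasc_1000")

# Replace with the path to a real Malayalam audio file.
result = asr("sample_audio.wav")
print(result["text"])
```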
IntelLabs/sqft-mistral-7b-v0.3-30-base
IntelLabs
"2025-02-12T17:04:17Z"
11
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "en", "arxiv:2410.03750", "arxiv:2501.16372", "arxiv:2306.11695", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-10-30T06:01:23Z"
--- language: en license: apache-2.0 library_name: transformers --- # SQFT Base Model: sqft-mistral-7b-v0.3-30-base - Source Model: [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3) - Sparse Method: [Wanda](https://github.com/locuslab/wanda) - Sparsity: 30% - Quantization: No ## Model Sources **Repository:** [https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/SQFT](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/SQFT) **Paper:** - [SQFT: Low-cost Model Adaptation in Low-precision Sparse Foundation Models](https://arxiv.org/abs/2410.03750) - [Low-Rank Adapters Meet Neural Architecture Search for LLM Compression](https://arxiv.org/abs/2501.16372) ## How to get this model Refer to the command in [SQFT/run_command/mistral-7b-v0.3/sparse_quantization.sh#11](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/SQFT/legacy/run_command/mistral-7b-v0.3/sparse_quantization.sh#11). ## Citation ```bibtex
@inproceedings{munoz-etal-2024-sqft,
    title = "{SQFT}: Low-cost Model Adaptation in Low-precision Sparse Foundation Models",
    author = "Munoz, Juan Pablo and Yuan, Jinjie and Jain, Nilesh",
    editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.findings-emnlp.749",
    pages = "12817--12832",
}
``` ## Acknowledgement Thanks to the work Wanda ([paper](https://arxiv.org/abs/2306.11695), [code](https://github.com/locuslab/wanda)), which provides a simple but effective pruning approach. ## License Apache-2.0
LarryAIDraw/chara_IsekaiMaou_SheraLGreenwood_v1
LarryAIDraw
"2023-10-12T18:37:07Z"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2023-10-12T18:31:17Z"
--- license: creativeml-openrail-m --- https://civitai.com/models/159176/shera-l-greenwood-or-isekai-maou-to-shoukan-shoujo-no-dorei-majutsu
CyberHarem/anastasia_idolmastercinderellagirls
CyberHarem
"2023-09-16T00:33:35Z"
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/anastasia_idolmastercinderellagirls", "license:mit", "region:us" ]
text-to-image
"2023-09-16T00:19:45Z"
--- license: mit datasets: - CyberHarem/anastasia_idolmastercinderellagirls pipeline_tag: text-to-image tags: - art --- # Lora of anastasia_idolmastercinderellagirls This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11). After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora. For example, if you want to use the model from step 6240, you need to download `6240/anastasia_idolmastercinderellagirls.pt` as the embedding and `6240/anastasia_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters. **The best step we recommend is 6240**, with a score of 0.934. The trigger words are: 1. `anastasia_idolmastercinderellagirls` 2. `short_hair, blue_eyes, grey_hair, smile, jewelry, breasts, hair_between_eyes, medium_breasts` This model is not recommended for the following groups, to whom we express our regret: 1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail. 2. Individuals facing application scenarios with high demands for accuracy in recreating character outfits. 3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm. 4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters. 5. Individuals who find the generated image content offensive to their values. 
These are available steps: | Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:-------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------| | 7200 | 0.925 | [Download](7200/anastasia_idolmastercinderellagirls.zip) | ![pattern_1-7200](7200/previews/pattern_1.png) | ![pattern_2-7200](7200/previews/pattern_2.png) | ![pattern_3-7200](7200/previews/pattern_3.png) | ![pattern_4-7200](7200/previews/pattern_4.png) | ![pattern_5-7200](7200/previews/pattern_5.png) | [<NSFW, click to see>](7200/previews/pattern_6.png) | ![bikini-7200](7200/previews/bikini.png) | [<NSFW, click to see>](7200/previews/bondage.png) | ![free-7200](7200/previews/free.png) | ![maid-7200](7200/previews/maid.png) | ![miko-7200](7200/previews/miko.png) | [<NSFW, click to see>](7200/previews/nude.png) | [<NSFW, click to see>](7200/previews/nude2.png) | ![suit-7200](7200/previews/suit.png) | ![yukata-7200](7200/previews/yukata.png) | | 6720 | 0.921 | [Download](6720/anastasia_idolmastercinderellagirls.zip) | ![pattern_1-6720](6720/previews/pattern_1.png) | ![pattern_2-6720](6720/previews/pattern_2.png) | ![pattern_3-6720](6720/previews/pattern_3.png) | ![pattern_4-6720](6720/previews/pattern_4.png) | ![pattern_5-6720](6720/previews/pattern_5.png) | [<NSFW, click to see>](6720/previews/pattern_6.png) | ![bikini-6720](6720/previews/bikini.png) | [<NSFW, click to see>](6720/previews/bondage.png) | ![free-6720](6720/previews/free.png) | ![maid-6720](6720/previews/maid.png) | ![miko-6720](6720/previews/miko.png) | [<NSFW, click to see>](6720/previews/nude.png) | [<NSFW, click to see>](6720/previews/nude2.png) | ![suit-6720](6720/previews/suit.png) | ![yukata-6720](6720/previews/yukata.png) | | **6240** | **0.934** | [**Download**](6240/anastasia_idolmastercinderellagirls.zip) | ![pattern_1-6240](6240/previews/pattern_1.png) | ![pattern_2-6240](6240/previews/pattern_2.png) | ![pattern_3-6240](6240/previews/pattern_3.png) | ![pattern_4-6240](6240/previews/pattern_4.png) | ![pattern_5-6240](6240/previews/pattern_5.png) | [<NSFW, click to see>](6240/previews/pattern_6.png) | ![bikini-6240](6240/previews/bikini.png) | [<NSFW, click to see>](6240/previews/bondage.png) | ![free-6240](6240/previews/free.png) | ![maid-6240](6240/previews/maid.png) | ![miko-6240](6240/previews/miko.png) | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) | ![suit-6240](6240/previews/suit.png) | ![yukata-6240](6240/previews/yukata.png) | | 5760 | 0.858 | [Download](5760/anastasia_idolmastercinderellagirls.zip) | ![pattern_1-5760](5760/previews/pattern_1.png) | ![pattern_2-5760](5760/previews/pattern_2.png) | ![pattern_3-5760](5760/previews/pattern_3.png) | 
![pattern_4-5760](5760/previews/pattern_4.png) | ![pattern_5-5760](5760/previews/pattern_5.png) | [<NSFW, click to see>](5760/previews/pattern_6.png) | ![bikini-5760](5760/previews/bikini.png) | [<NSFW, click to see>](5760/previews/bondage.png) | ![free-5760](5760/previews/free.png) | ![maid-5760](5760/previews/maid.png) | ![miko-5760](5760/previews/miko.png) | [<NSFW, click to see>](5760/previews/nude.png) | [<NSFW, click to see>](5760/previews/nude2.png) | ![suit-5760](5760/previews/suit.png) | ![yukata-5760](5760/previews/yukata.png) | | 5280 | 0.892 | [Download](5280/anastasia_idolmastercinderellagirls.zip) | ![pattern_1-5280](5280/previews/pattern_1.png) | ![pattern_2-5280](5280/previews/pattern_2.png) | ![pattern_3-5280](5280/previews/pattern_3.png) | ![pattern_4-5280](5280/previews/pattern_4.png) | ![pattern_5-5280](5280/previews/pattern_5.png) | [<NSFW, click to see>](5280/previews/pattern_6.png) | ![bikini-5280](5280/previews/bikini.png) | [<NSFW, click to see>](5280/previews/bondage.png) | ![free-5280](5280/previews/free.png) | ![maid-5280](5280/previews/maid.png) | ![miko-5280](5280/previews/miko.png) | [<NSFW, click to see>](5280/previews/nude.png) | [<NSFW, click to see>](5280/previews/nude2.png) | ![suit-5280](5280/previews/suit.png) | ![yukata-5280](5280/previews/yukata.png) | | 4800 | 0.915 | [Download](4800/anastasia_idolmastercinderellagirls.zip) | ![pattern_1-4800](4800/previews/pattern_1.png) | ![pattern_2-4800](4800/previews/pattern_2.png) | ![pattern_3-4800](4800/previews/pattern_3.png) | ![pattern_4-4800](4800/previews/pattern_4.png) | ![pattern_5-4800](4800/previews/pattern_5.png) | [<NSFW, click to see>](4800/previews/pattern_6.png) | ![bikini-4800](4800/previews/bikini.png) | [<NSFW, click to see>](4800/previews/bondage.png) | ![free-4800](4800/previews/free.png) | ![maid-4800](4800/previews/maid.png) | ![miko-4800](4800/previews/miko.png) | [<NSFW, click to see>](4800/previews/nude.png) | [<NSFW, click to see>](4800/previews/nude2.png) | ![suit-4800](4800/previews/suit.png) | ![yukata-4800](4800/previews/yukata.png) | | 4320 | 0.931 | [Download](4320/anastasia_idolmastercinderellagirls.zip) | ![pattern_1-4320](4320/previews/pattern_1.png) | ![pattern_2-4320](4320/previews/pattern_2.png) | ![pattern_3-4320](4320/previews/pattern_3.png) | ![pattern_4-4320](4320/previews/pattern_4.png) | ![pattern_5-4320](4320/previews/pattern_5.png) | [<NSFW, click to see>](4320/previews/pattern_6.png) | ![bikini-4320](4320/previews/bikini.png) | [<NSFW, click to see>](4320/previews/bondage.png) | ![free-4320](4320/previews/free.png) | ![maid-4320](4320/previews/maid.png) | ![miko-4320](4320/previews/miko.png) | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) | ![suit-4320](4320/previews/suit.png) | ![yukata-4320](4320/previews/yukata.png) | | 3840 | 0.893 | [Download](3840/anastasia_idolmastercinderellagirls.zip) | ![pattern_1-3840](3840/previews/pattern_1.png) | ![pattern_2-3840](3840/previews/pattern_2.png) | ![pattern_3-3840](3840/previews/pattern_3.png) | ![pattern_4-3840](3840/previews/pattern_4.png) | ![pattern_5-3840](3840/previews/pattern_5.png) | [<NSFW, click to see>](3840/previews/pattern_6.png) | ![bikini-3840](3840/previews/bikini.png) | [<NSFW, click to see>](3840/previews/bondage.png) | ![free-3840](3840/previews/free.png) | ![maid-3840](3840/previews/maid.png) | ![miko-3840](3840/previews/miko.png) | [<NSFW, click to see>](3840/previews/nude.png) | [<NSFW, click to see>](3840/previews/nude2.png) | 
![suit-3840](3840/previews/suit.png) | ![yukata-3840](3840/previews/yukata.png) | | 3360 | 0.910 | [Download](3360/anastasia_idolmastercinderellagirls.zip) | ![pattern_1-3360](3360/previews/pattern_1.png) | ![pattern_2-3360](3360/previews/pattern_2.png) | ![pattern_3-3360](3360/previews/pattern_3.png) | ![pattern_4-3360](3360/previews/pattern_4.png) | ![pattern_5-3360](3360/previews/pattern_5.png) | [<NSFW, click to see>](3360/previews/pattern_6.png) | ![bikini-3360](3360/previews/bikini.png) | [<NSFW, click to see>](3360/previews/bondage.png) | ![free-3360](3360/previews/free.png) | ![maid-3360](3360/previews/maid.png) | ![miko-3360](3360/previews/miko.png) | [<NSFW, click to see>](3360/previews/nude.png) | [<NSFW, click to see>](3360/previews/nude2.png) | ![suit-3360](3360/previews/suit.png) | ![yukata-3360](3360/previews/yukata.png) | | 2880 | 0.909 | [Download](2880/anastasia_idolmastercinderellagirls.zip) | ![pattern_1-2880](2880/previews/pattern_1.png) | ![pattern_2-2880](2880/previews/pattern_2.png) | ![pattern_3-2880](2880/previews/pattern_3.png) | ![pattern_4-2880](2880/previews/pattern_4.png) | ![pattern_5-2880](2880/previews/pattern_5.png) | [<NSFW, click to see>](2880/previews/pattern_6.png) | ![bikini-2880](2880/previews/bikini.png) | [<NSFW, click to see>](2880/previews/bondage.png) | ![free-2880](2880/previews/free.png) | ![maid-2880](2880/previews/maid.png) | ![miko-2880](2880/previews/miko.png) | [<NSFW, click to see>](2880/previews/nude.png) | [<NSFW, click to see>](2880/previews/nude2.png) | ![suit-2880](2880/previews/suit.png) | ![yukata-2880](2880/previews/yukata.png) | | 2400 | 0.906 | [Download](2400/anastasia_idolmastercinderellagirls.zip) | ![pattern_1-2400](2400/previews/pattern_1.png) | ![pattern_2-2400](2400/previews/pattern_2.png) | ![pattern_3-2400](2400/previews/pattern_3.png) | ![pattern_4-2400](2400/previews/pattern_4.png) | ![pattern_5-2400](2400/previews/pattern_5.png) | [<NSFW, click to see>](2400/previews/pattern_6.png) | ![bikini-2400](2400/previews/bikini.png) | [<NSFW, click to see>](2400/previews/bondage.png) | ![free-2400](2400/previews/free.png) | ![maid-2400](2400/previews/maid.png) | ![miko-2400](2400/previews/miko.png) | [<NSFW, click to see>](2400/previews/nude.png) | [<NSFW, click to see>](2400/previews/nude2.png) | ![suit-2400](2400/previews/suit.png) | ![yukata-2400](2400/previews/yukata.png) | | 1920 | 0.821 | [Download](1920/anastasia_idolmastercinderellagirls.zip) | ![pattern_1-1920](1920/previews/pattern_1.png) | ![pattern_2-1920](1920/previews/pattern_2.png) | ![pattern_3-1920](1920/previews/pattern_3.png) | ![pattern_4-1920](1920/previews/pattern_4.png) | ![pattern_5-1920](1920/previews/pattern_5.png) | [<NSFW, click to see>](1920/previews/pattern_6.png) | ![bikini-1920](1920/previews/bikini.png) | [<NSFW, click to see>](1920/previews/bondage.png) | ![free-1920](1920/previews/free.png) | ![maid-1920](1920/previews/maid.png) | ![miko-1920](1920/previews/miko.png) | [<NSFW, click to see>](1920/previews/nude.png) | [<NSFW, click to see>](1920/previews/nude2.png) | ![suit-1920](1920/previews/suit.png) | ![yukata-1920](1920/previews/yukata.png) | | 1440 | 0.776 | [Download](1440/anastasia_idolmastercinderellagirls.zip) | ![pattern_1-1440](1440/previews/pattern_1.png) | ![pattern_2-1440](1440/previews/pattern_2.png) | ![pattern_3-1440](1440/previews/pattern_3.png) | ![pattern_4-1440](1440/previews/pattern_4.png) | ![pattern_5-1440](1440/previews/pattern_5.png) | [<NSFW, click to see>](1440/previews/pattern_6.png) | 
![bikini-1440](1440/previews/bikini.png) | [<NSFW, click to see>](1440/previews/bondage.png) | ![free-1440](1440/previews/free.png) | ![maid-1440](1440/previews/maid.png) | ![miko-1440](1440/previews/miko.png) | [<NSFW, click to see>](1440/previews/nude.png) | [<NSFW, click to see>](1440/previews/nude2.png) | ![suit-1440](1440/previews/suit.png) | ![yukata-1440](1440/previews/yukata.png) | | 960 | 0.823 | [Download](960/anastasia_idolmastercinderellagirls.zip) | ![pattern_1-960](960/previews/pattern_1.png) | ![pattern_2-960](960/previews/pattern_2.png) | ![pattern_3-960](960/previews/pattern_3.png) | ![pattern_4-960](960/previews/pattern_4.png) | ![pattern_5-960](960/previews/pattern_5.png) | [<NSFW, click to see>](960/previews/pattern_6.png) | ![bikini-960](960/previews/bikini.png) | [<NSFW, click to see>](960/previews/bondage.png) | ![free-960](960/previews/free.png) | ![maid-960](960/previews/maid.png) | ![miko-960](960/previews/miko.png) | [<NSFW, click to see>](960/previews/nude.png) | [<NSFW, click to see>](960/previews/nude2.png) | ![suit-960](960/previews/suit.png) | ![yukata-960](960/previews/yukata.png) | | 480 | 0.803 | [Download](480/anastasia_idolmastercinderellagirls.zip) | ![pattern_1-480](480/previews/pattern_1.png) | ![pattern_2-480](480/previews/pattern_2.png) | ![pattern_3-480](480/previews/pattern_3.png) | ![pattern_4-480](480/previews/pattern_4.png) | ![pattern_5-480](480/previews/pattern_5.png) | [<NSFW, click to see>](480/previews/pattern_6.png) | ![bikini-480](480/previews/bikini.png) | [<NSFW, click to see>](480/previews/bondage.png) | ![free-480](480/previews/free.png) | ![maid-480](480/previews/maid.png) | ![miko-480](480/previews/miko.png) | [<NSFW, click to see>](480/previews/nude.png) | [<NSFW, click to see>](480/previews/nude2.png) | ![suit-480](480/previews/suit.png) | ![yukata-480](480/previews/yukata.png) |
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e2_s108_v4_l4_v100
KingKazma
"2023-08-13T18:59:21Z"
0
0
peft
[ "peft", "region:us" ]
null
"2023-08-13T18:20:40Z"
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
gdurkin/segformer-b0-finetuned-segments-floods-S2-pseudoRGBv1
gdurkin
"2023-11-12T22:53:25Z"
32
0
transformers
[ "transformers", "pytorch", "segformer", "climate", "dataset:gdurkin/flood_dataset_S2_mod", "endpoints_compatible", "region:us" ]
null
"2023-11-10T13:17:47Z"
--- datasets: - gdurkin/flood_dataset_S2_mod metrics: - mean_iou tags: - climate ---
visdata/wld15
visdata
"2025-03-06T10:39:05Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-03-06T10:33:20Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gaudi/opus-mt-es-ceb-ctranslate2
gaudi
"2024-10-19T02:32:34Z"
11
0
transformers
[ "transformers", "marian", "ctranslate2", "translation", "license:apache-2.0", "endpoints_compatible", "region:us" ]
translation
"2024-07-22T15:43:56Z"
--- tags: - ctranslate2 - translation license: apache-2.0 --- # Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-es-ceb) - This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weight quantization, layer fusion, and batch reordering to accelerate inference and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Currently supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Benchmarks were run against the `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers. 
## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-es-ceb).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified relative to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed on our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield different trade-offs between inference performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-es-ceb --output_dir ./ctranslate2/opus-mt-es-ceb-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-es-ceb-ctranslate2 ``` #### Take the Python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-es-ceb-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. 
) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-es-ceb-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be identical to those of the [original Hugging Face repository](https://huggingface.co/Helsinki-NLP/opus-mt-es-ceb) by Helsinki-NLP.
Xu-Ouyang/pythia-1.4b-deduped-int2-step4000-GPTQ-wikitext2-uva
Xu-Ouyang
"2024-09-22T05:42:53Z"
61
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "2-bit", "gptq", "region:us" ]
text-generation
"2024-09-22T05:42:37Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
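Since the card itself is unfilled, the following is only a minimal loading sketch, assuming this repository follows the standard GPTQ checkpoint layout (the `gptq` and `2-bit` tags above suggest it does) and that a GPTQ backend such as `auto-gptq` or `gptqmodel` is installed alongside `transformers`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Xu-Ouyang/pythia-1.4b-deduped-int2-step4000-GPTQ-wikitext2-uva"
tokenizer = AutoTokenizer.from_pretrained(repo)
# The quantized weights are handled by the installed GPTQ backend
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```

Note that 2-bit quantization is aggressive, so generation quality may be substantially degraded relative to the full-precision checkpoint.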
RAG-Gym/ReAct-HotpotQA-SFT
RAG-Gym
"2025-02-14T15:23:52Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:adapter:meta-llama/Llama-3.1-8B-Instruct", "region:us" ]
null
"2025-02-14T15:02:27Z"
--- base_model: meta-llama/Meta-Llama-3.1-8B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.0
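Since the card is otherwise unfilled, here is a minimal loading sketch, assuming the repository contains a standard PEFT LoRA adapter for the `meta-llama/Llama-3.1-8B-Instruct` base model declared in the metadata (a gated model that requires accepting the Llama license):

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "RAG-Gym/ReAct-HotpotQA-SFT"
# Loads the base model named in the adapter config, then applies the LoRA weights
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
```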
ClarenceDan/f29e61c6-1490-4a1d-829f-977e9d79ec01
ClarenceDan
"2025-01-24T22:50:36Z"
7
0
peft
[ "peft", "safetensors", "opt", "axolotl", "generated_from_trainer", "base_model:facebook/opt-1.3b", "base_model:adapter:facebook/opt-1.3b", "license:other", "region:us" ]
null
"2025-01-24T22:47:35Z"
--- library_name: peft license: other base_model: facebook/opt-1.3b tags: - axolotl - generated_from_trainer model-index: - name: f29e61c6-1490-4a1d-829f-977e9d79ec01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: facebook/opt-1.3b bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 35a9124e2457af95_train_data.json ds_type: json format: custom path: /workspace/input_data/35a9124e2457af95_train_data.json type: field_instruction: prompt field_output: chosen format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: ClarenceDan/f29e61c6-1490-4a1d-829f-977e9d79ec01 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 10 micro_batch_size: 2 mlflow_experiment_name: /tmp/35a9124e2457af95_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: cdc8f7fc-dd1b-4278-83ec-25381260a65d wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: cdc8f7fc-dd1b-4278-83ec-25381260a65d warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # f29e61c6-1490-4a1d-829f-977e9d79ec01 This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the dataset referenced in the axolotl config above (`35a9124e2457af95_train_data.json`). 
It achieves the following results on the evaluation set: - Loss: 2.2265 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 8.993 | 0.0004 | 1 | 2.3342 | | 10.7989 | 0.0013 | 3 | 2.3288 | | 10.3255 | 0.0025 | 6 | 2.2952 | | 8.9313 | 0.0038 | 9 | 2.2265 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
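The checkpoint is a LoRA adapter, so it must be applied on top of the `facebook/opt-1.3b` base model. A minimal inference sketch with PEFT (not part of the original card):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the fine-tuned LoRA adapter
base = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b", device_map="auto")
model = PeftModel.from_pretrained(base, "ClarenceDan/f29e61c6-1490-4a1d-829f-977e9d79ec01")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```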
Sorawiz/MS-24B-Test
Sorawiz
"2025-02-28T20:35:27Z"
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2311.03099", "base_model:PocketDoc/Dans-PersonalityEngine-V1.2.0-24b", "base_model:merge:PocketDoc/Dans-PersonalityEngine-V1.2.0-24b", "base_model:ReadyArt/Forgotten-Abomination-24B-V2.2", "base_model:merge:ReadyArt/Forgotten-Abomination-24B-V2.2", "base_model:ReadyArt/Forgotten-Safeword-24B-V2.0", "base_model:merge:ReadyArt/Forgotten-Safeword-24B-V2.0", "base_model:ReadyArt/Forgotten-Safeword-24B-V2.2", "base_model:merge:ReadyArt/Forgotten-Safeword-24B-V2.2", "base_model:SicariusSicariiStuff/Redemption_Wind_24B", "base_model:merge:SicariusSicariiStuff/Redemption_Wind_24B", "base_model:ToastyPigeon/ms3-roselily-rp-v2", "base_model:merge:ToastyPigeon/ms3-roselily-rp-v2", "base_model:trashpanda-org/MS-24B-Mullein-v1-lora", "base_model:merge:trashpanda-org/MS-24B-Mullein-v1-lora", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-02-28T20:17:50Z"
--- base_model: - PocketDoc/Dans-PersonalityEngine-V1.2.0-24b - SicariusSicariiStuff/Redemption_Wind_24B - ReadyArt/Forgotten-Safeword-24B-V2.2 - ReadyArt/Forgotten-Safeword-24B-V2.0 - trashpanda-org/MS-24B-Mullein-v1-lora - ToastyPigeon/ms3-roselily-rp-v2 - ReadyArt/Forgotten-Abomination-24B-V2.2 library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [ReadyArt/Forgotten-Safeword-24B-V2.2](https://huggingface.co/ReadyArt/Forgotten-Safeword-24B-V2.2) as a base. ### Models Merged The following models were included in the merge: * [PocketDoc/Dans-PersonalityEngine-V1.2.0-24b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.2.0-24b) * [SicariusSicariiStuff/Redemption_Wind_24B](https://huggingface.co/SicariusSicariiStuff/Redemption_Wind_24B) * [ReadyArt/Forgotten-Safeword-24B-V2.0](https://huggingface.co/ReadyArt/Forgotten-Safeword-24B-V2.0) + [trashpanda-org/MS-24B-Mullein-v1-lora](https://huggingface.co/trashpanda-org/MS-24B-Mullein-v1-lora) * [ToastyPigeon/ms3-roselily-rp-v2](https://huggingface.co/ToastyPigeon/ms3-roselily-rp-v2) * [ReadyArt/Forgotten-Abomination-24B-V2.2](https://huggingface.co/ReadyArt/Forgotten-Abomination-24B-V2.2) ### Configuration The following YAML configuration was used to produce this model: ```yaml merge_method: dare_ties base_model: ReadyArt/Forgotten-Safeword-24B-V2.2 models: - model: ReadyArt/Forgotten-Safeword-24B-V2.2 parameters: weight: 0.2 - model: ReadyArt/Forgotten-Abomination-24B-V2.2 parameters: weight: 0.2 - model: ReadyArt/Forgotten-Safeword-24B-V2.0+trashpanda-org/MS-24B-Mullein-v1-lora parameters: weight: 0.2 - model: ToastyPigeon/ms3-roselily-rp-v2 parameters: weight: 0.2 - model: SicariusSicariiStuff/Redemption_Wind_24B parameters: weight: 0.1 - model: PocketDoc/Dans-PersonalityEngine-V1.2.0-24b parameters: weight: 0.1 parameters: density: 0.50 tokenizer: source: union chat_template: auto ```
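The merged weights load like any other Mistral-architecture checkpoint. A minimal sketch (not from the original card), assuming the merged tokenizer carries a chat template (the config sets `chat_template: auto`); at ~24B parameters, bf16 inference needs roughly 48 GB of accelerator memory:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sorawiz/MS-24B-Test"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```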
mradermacher/TB0-8B-sce-GGUF
mradermacher
"2025-02-08T21:35:45Z"
309
1
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "en", "base_model:Quazim0t0/TB0-8B-sce", "base_model:quantized:Quazim0t0/TB0-8B-sce", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-02-08T21:13:16Z"
--- base_model: Quazim0t0/TB0-8B-sce language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Quazim0t0/TB0-8B-sce <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/TB0-8B-sce-GGUF/resolve/main/TB0-8B-sce.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/TB0-8B-sce-GGUF/resolve/main/TB0-8B-sce.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/TB0-8B-sce-GGUF/resolve/main/TB0-8B-sce.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/TB0-8B-sce-GGUF/resolve/main/TB0-8B-sce.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/TB0-8B-sce-GGUF/resolve/main/TB0-8B-sce.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/TB0-8B-sce-GGUF/resolve/main/TB0-8B-sce.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/TB0-8B-sce-GGUF/resolve/main/TB0-8B-sce.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/TB0-8B-sce-GGUF/resolve/main/TB0-8B-sce.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/TB0-8B-sce-GGUF/resolve/main/TB0-8B-sce.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/TB0-8B-sce-GGUF/resolve/main/TB0-8B-sce.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/TB0-8B-sce-GGUF/resolve/main/TB0-8B-sce.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/TB0-8B-sce-GGUF/resolve/main/TB0-8B-sce.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
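For a programmatic alternative to the llama.cpp CLI, here is a minimal sketch with `llama-cpp-python`, assuming one of the quant files from the table above has been downloaded locally (the filename is illustrative):

```python
from llama_cpp import Llama

# n_gpu_layers=-1 offloads all layers to the GPU; set to 0 for CPU-only inference
llm = Llama(model_path="TB0-8B-sce.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=-1)
out = llm("Explain the difference between Q4_K_M and Q8_0 quantization.", max_tokens=128)
print(out["choices"][0]["text"])
```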
mradermacher/Hyperion-3.0-Mixtral-3x7B-GGUF
mradermacher
"2024-05-06T06:09:54Z"
4
0
transformers
[ "transformers", "gguf", "moe", "en", "dataset:Locutusque/dibt-instruct", "dataset:PygmalionAI/PIPPA", "dataset:Locutusque/hyperion-v3.0", "base_model:Locutusque/Hyperion-3.0-Mixtral-3x7B", "base_model:quantized:Locutusque/Hyperion-3.0-Mixtral-3x7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-03-18T06:53:39Z"
--- base_model: Locutusque/Hyperion-3.0-Mixtral-3x7B datasets: - Locutusque/dibt-instruct - PygmalionAI/PIPPA - Locutusque/hyperion-v3.0 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - moe --- ## About static quants of https://huggingface.co/Locutusque/Hyperion-3.0-Mixtral-3x7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Hyperion-3.0-Mixtral-3x7B-GGUF/resolve/main/Hyperion-3.0-Mixtral-3x7B.Q2_K.gguf) | Q2_K | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/Hyperion-3.0-Mixtral-3x7B-GGUF/resolve/main/Hyperion-3.0-Mixtral-3x7B.IQ3_XS.gguf) | IQ3_XS | 7.8 | | | [GGUF](https://huggingface.co/mradermacher/Hyperion-3.0-Mixtral-3x7B-GGUF/resolve/main/Hyperion-3.0-Mixtral-3x7B.Q3_K_S.gguf) | Q3_K_S | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/Hyperion-3.0-Mixtral-3x7B-GGUF/resolve/main/Hyperion-3.0-Mixtral-3x7B.IQ3_S.gguf) | IQ3_S | 8.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Hyperion-3.0-Mixtral-3x7B-GGUF/resolve/main/Hyperion-3.0-Mixtral-3x7B.IQ3_M.gguf) | IQ3_M | 8.4 | | | [GGUF](https://huggingface.co/mradermacher/Hyperion-3.0-Mixtral-3x7B-GGUF/resolve/main/Hyperion-3.0-Mixtral-3x7B.Q3_K_M.gguf) | Q3_K_M | 9.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Hyperion-3.0-Mixtral-3x7B-GGUF/resolve/main/Hyperion-3.0-Mixtral-3x7B.Q3_K_L.gguf) | Q3_K_L | 9.9 | | | [GGUF](https://huggingface.co/mradermacher/Hyperion-3.0-Mixtral-3x7B-GGUF/resolve/main/Hyperion-3.0-Mixtral-3x7B.IQ4_XS.gguf) | IQ4_XS | 10.3 | | | [GGUF](https://huggingface.co/mradermacher/Hyperion-3.0-Mixtral-3x7B-GGUF/resolve/main/Hyperion-3.0-Mixtral-3x7B.Q4_K_S.gguf) | Q4_K_S | 10.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hyperion-3.0-Mixtral-3x7B-GGUF/resolve/main/Hyperion-3.0-Mixtral-3x7B.Q4_K_M.gguf) | Q4_K_M | 11.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hyperion-3.0-Mixtral-3x7B-GGUF/resolve/main/Hyperion-3.0-Mixtral-3x7B.Q5_K_S.gguf) | Q5_K_S | 13.0 | | | [GGUF](https://huggingface.co/mradermacher/Hyperion-3.0-Mixtral-3x7B-GGUF/resolve/main/Hyperion-3.0-Mixtral-3x7B.Q5_K_M.gguf) | Q5_K_M | 13.4 | | | [GGUF](https://huggingface.co/mradermacher/Hyperion-3.0-Mixtral-3x7B-GGUF/resolve/main/Hyperion-3.0-Mixtral-3x7B.Q6_K.gguf) | Q6_K | 15.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Hyperion-3.0-Mixtral-3x7B-GGUF/resolve/main/Hyperion-3.0-Mixtral-3x7B.Q8_0.gguf) | Q8_0 | 19.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or 
if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
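A quant can also be fetched and loaded directly from the Hub; a minimal sketch, assuming `huggingface_hub` and `llama-cpp-python` are installed (the filename is taken from the table above):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant file from this repository into the local HF cache
path = hf_hub_download(
    repo_id="mradermacher/Hyperion-3.0-Mixtral-3x7B-GGUF",
    filename="Hyperion-3.0-Mixtral-3x7B.Q4_K_S.gguf",  # the "fast, recommended" quant
)
llm = Llama(model_path=path, n_ctx=2048)
print(llm("Hello!", max_tokens=32)["choices"][0]["text"])
```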
Sathvik6323/Telugu_dataset_other_sentiment_distilbert
Sathvik6323
"2024-04-27T14:14:50Z"
109
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-11-13T10:50:54Z"
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: Telugu_dataset_other_sentiment_distilbert results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Telugu_dataset_other_sentiment_distilbert This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Framework versions - Transformers 4.37.2 - Pytorch 2.0.1+cpu - Datasets 2.16.1 - Tokenizers 0.15.1
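As a minimal usage sketch (not from the original card), the fine-tuned classifier can be run through the standard `transformers` pipeline; the label names and their sentiment mapping are not documented, so inspect the outputs before relying on them:

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="Sathvik6323/Telugu_dataset_other_sentiment_distilbert",
)
print(clf("example input text"))  # e.g. [{'label': 'LABEL_0', 'score': 0.97}]
```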
featherless-ai-quants/Locutusque-Hercules-2.5-Mistral-7B-GGUF
featherless-ai-quants
"2024-11-04T10:17:40Z"
41
0
null
[ "gguf", "text-generation", "base_model:Locutusque/Hercules-2.5-Mistral-7B", "base_model:quantized:Locutusque/Hercules-2.5-Mistral-7B", "endpoints_compatible", "region:us" ]
text-generation
"2024-11-04T08:15:19Z"
--- base_model: Locutusque/Hercules-2.5-Mistral-7B pipeline_tag: text-generation quantized_by: featherless-ai-quants --- # Locutusque/Hercules-2.5-Mistral-7B GGUF Quantizations 🚀 ![Featherless AI Quants](./featherless-quants.png) *Optimized GGUF quantization files for enhanced model performance* > Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee. --- ## Available Quantizations 📊 | Quantization Type | File | Size | |-------------------|------|------| | IQ4_XS | [Locutusque-Hercules-2.5-Mistral-7B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hercules-2.5-Mistral-7B-GGUF/blob/main/Locutusque-Hercules-2.5-Mistral-7B-IQ4_XS.gguf) | 3761.66 MB | | Q2_K | [Locutusque-Hercules-2.5-Mistral-7B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hercules-2.5-Mistral-7B-GGUF/blob/main/Locutusque-Hercules-2.5-Mistral-7B-Q2_K.gguf) | 2593.27 MB | | Q3_K_L | [Locutusque-Hercules-2.5-Mistral-7B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hercules-2.5-Mistral-7B-GGUF/blob/main/Locutusque-Hercules-2.5-Mistral-7B-Q3_K_L.gguf) | 3644.97 MB | | Q3_K_M | [Locutusque-Hercules-2.5-Mistral-7B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hercules-2.5-Mistral-7B-GGUF/blob/main/Locutusque-Hercules-2.5-Mistral-7B-Q3_K_M.gguf) | 3355.97 MB | | Q3_K_S | [Locutusque-Hercules-2.5-Mistral-7B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hercules-2.5-Mistral-7B-GGUF/blob/main/Locutusque-Hercules-2.5-Mistral-7B-Q3_K_S.gguf) | 3017.97 MB | | Q4_K_M | [Locutusque-Hercules-2.5-Mistral-7B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hercules-2.5-Mistral-7B-GGUF/blob/main/Locutusque-Hercules-2.5-Mistral-7B-Q4_K_M.gguf) | 4166.07 MB | | Q4_K_S | [Locutusque-Hercules-2.5-Mistral-7B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hercules-2.5-Mistral-7B-GGUF/blob/main/Locutusque-Hercules-2.5-Mistral-7B-Q4_K_S.gguf) | 3948.57 MB | | Q5_K_M | [Locutusque-Hercules-2.5-Mistral-7B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hercules-2.5-Mistral-7B-GGUF/blob/main/Locutusque-Hercules-2.5-Mistral-7B-Q5_K_M.gguf) | 4893.69 MB | | Q5_K_S | [Locutusque-Hercules-2.5-Mistral-7B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hercules-2.5-Mistral-7B-GGUF/blob/main/Locutusque-Hercules-2.5-Mistral-7B-Q5_K_S.gguf) | 4766.19 MB | | Q6_K | [Locutusque-Hercules-2.5-Mistral-7B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hercules-2.5-Mistral-7B-GGUF/blob/main/Locutusque-Hercules-2.5-Mistral-7B-Q6_K.gguf) | 5666.80 MB | | Q8_0 | [Locutusque-Hercules-2.5-Mistral-7B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hercules-2.5-Mistral-7B-GGUF/blob/main/Locutusque-Hercules-2.5-Mistral-7B-Q8_0.gguf) | 7339.34 MB | --- ## ⚡ Powered by [Featherless AI](https://featherless.ai) ### Key Features - 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly - 🛠️ **Zero Infrastructure** - No server setup or maintenance required - 📚 **Vast Compatibility** - Support for 2400+ models and counting - 💎 **Affordable Pricing** - Starting at just $10/month --- **Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
afrideva/MiniChat-1.5-3B-GGUF
afrideva
"2023-11-26T18:22:37Z"
31
3
transformers
[ "transformers", "gguf", "ggml", "quantized", "q2_k", "q3_k_m", "q4_k_m", "q5_k_m", "q6_k", "q8_0", "text-generation", "en", "zh", "arxiv:2311.07052", "arxiv:2310.05914", "arxiv:2305.18290", "base_model:GeneZC/MiniChat-1.5-3B", "base_model:quantized:GeneZC/MiniChat-1.5-3B", "license:apache-2.0", "region:us" ]
text-generation
"2023-11-26T18:11:56Z"
--- base_model: GeneZC/MiniChat-1.5-3B inference: false language: - en - zh library_name: transformers license: apache-2.0 model_creator: GeneZC model_name: MiniChat-1.5-3B pipeline_tag: text-generation quantized_by: afrideva tags: - gguf - ggml - quantized - q2_k - q3_k_m - q4_k_m - q5_k_m - q6_k - q8_0 widget: - text: "<s> [|User|] Hi \U0001F44B </s>[|Assistant|]" --- # GeneZC/MiniChat-1.5-3B-GGUF Quantized GGUF model files for [MiniChat-1.5-3B](https://huggingface.co/GeneZC/MiniChat-1.5-3B) from [GeneZC](https://huggingface.co/GeneZC) | Name | Quant method | Size | | ---- | ---- | ---- | | [minichat-1.5-3b.fp16.gguf](https://huggingface.co/afrideva/MiniChat-1.5-3B-GGUF/resolve/main/minichat-1.5-3b.fp16.gguf) | fp16 | 6.04 GB | | [minichat-1.5-3b.q2_k.gguf](https://huggingface.co/afrideva/MiniChat-1.5-3B-GGUF/resolve/main/minichat-1.5-3b.q2_k.gguf) | q2_k | 1.30 GB | | [minichat-1.5-3b.q3_k_m.gguf](https://huggingface.co/afrideva/MiniChat-1.5-3B-GGUF/resolve/main/minichat-1.5-3b.q3_k_m.gguf) | q3_k_m | 1.51 GB | | [minichat-1.5-3b.q4_k_m.gguf](https://huggingface.co/afrideva/MiniChat-1.5-3B-GGUF/resolve/main/minichat-1.5-3b.q4_k_m.gguf) | q4_k_m | 1.85 GB | | [minichat-1.5-3b.q5_k_m.gguf](https://huggingface.co/afrideva/MiniChat-1.5-3B-GGUF/resolve/main/minichat-1.5-3b.q5_k_m.gguf) | q5_k_m | 2.15 GB | | [minichat-1.5-3b.q6_k.gguf](https://huggingface.co/afrideva/MiniChat-1.5-3B-GGUF/resolve/main/minichat-1.5-3b.q6_k.gguf) | q6_k | 2.48 GB | | [minichat-1.5-3b.q8_0.gguf](https://huggingface.co/afrideva/MiniChat-1.5-3B-GGUF/resolve/main/minichat-1.5-3b.q8_0.gguf) | q8_0 | 3.21 GB | ## Original Model Card: ## MiniChat-1.5-3B 📑 [arXiv](https://arxiv.org/abs/2311.07052) | 👻 [GitHub](https://github.com/GeneZC/MiniMA) | 🤗 [HuggingFace-MiniMA](https://huggingface.co/GeneZC/MiniMA-3B) | 🤗 [HuggingFace-MiniChat](https://huggingface.co/GeneZC/MiniChat-3B) | 🤗 [HuggingFace-MiniChat-1.5](https://huggingface.co/GeneZC/MiniChat-1.5-3B) | 🤖 [ModelScope-MiniMA](https://modelscope.cn/models/GeneZC/MiniMA-3B) | 🤖 [ModelScope-MiniChat](https://modelscope.cn/models/GeneZC/MiniChat-3B) 🆕 **Updates from MiniChat-3B**: - better data mixture; - use of [NEFTune](https://arxiv.org/abs/2310.05914); - use of [DPO](https://arxiv.org/abs/2305.18290). ❗ Must comply with LICENSE of LLaMA2 since it is derived from LLaMA2. A language model distilled and finetuned from an adapted version of LLaMA2-7B following "Towards the Law of Capacity Gap in Distilling Language Models". Outperforming a wide range of 3B competitors in GPT4 evaluation and even competing with several 7B chat models. <img src="./teaser_b.jpg" alt="teaser_b" width="687" /> The following is an example code snippet to use MiniChat-3B: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer from conversation import get_default_conv_template # MiniChat tokenizer = AutoTokenizer.from_pretrained("GeneZC/MiniChat-3B", use_fast=False) # GPU. model = AutoModelForCausalLM.from_pretrained("GeneZC/MiniChat-3B", use_cache=True, device_map="auto", torch_dtype=torch.float16).eval() # CPU. # model = AutoModelForCausalLM.from_pretrained("GeneZC/MiniChat-3B", use_cache=True, device_map="cpu", torch_dtype=torch.float16).eval() conv = get_default_conv_template("minichat") question = "Implement a program to find the common elements in two arrays without using any extra data structures." 
conv.append_message(conv.roles[0], question) conv.append_message(conv.roles[1], None) prompt = conv.get_prompt() input_ids = tokenizer([prompt]).input_ids output_ids = model.generate( torch.as_tensor(input_ids).cuda(), do_sample=True, temperature=0.7, max_new_tokens=1024, ) output_ids = output_ids[0][len(input_ids[0]):] output = tokenizer.decode(output_ids, skip_special_tokens=True).strip() # output: "def common_elements(arr1, arr2):\n if len(arr1) == 0:\n return []\n if len(arr2) == 0:\n return arr1\n\n common_elements = []\n for element in arr1:\n if element in arr2:\n common_elements.append(element)\n\n return common_elements" # Multiturn conversation could be realized by continuously appending questions to `conv`. ``` ## Bibtex ```bibtex @article{zhang2023law, title={Towards the Law of Capacity Gap in Distilling Language Models}, author={Zhang, Chen and Song, Dawei and Ye, Zheyu and Gao, Yan}, year={2023}, url={https://arxiv.org/abs/2311.07052} } ```
HelpingAI/Priya-3B
HelpingAI
"2024-12-05T08:22:32Z"
164
4
transformers
[ "transformers", "safetensors", "llama", "text-generation", "HelpingAI", "Priya", "Teen-AI", "Conversational", "SLM", "conversational", "en", "hi", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-12-05T06:35:30Z"
--- license: other license_name: helpingai license_link: https://helpingai.co/license pipeline_tag: text-generation language: - en - hi tags: - HelpingAI - Priya - Teen-AI - Conversational - SLM library_name: transformers --- <div align="center"> <span style="background: linear-gradient(45deg, #FF69B4, #9370DB); -webkit-background-clip: text; -webkit-text-fill-color: transparent;">💜 Priya-3B</span> <span style="color: #FF69B4;">***Your Bestie AI - Sweet, Savage, and Smart AF!***</span> </div> <div align="center" style="display: flex; justify-content: center; gap: 4px;"> <a href="https://github.com/HelpingAI"><img src="https://img.shields.io/badge/GitHub-Organization-blue.svg" alt="GitHub Organization"></a> <a href="https://huggingface.co/HelpingAI"><img src="https://img.shields.io/badge/🤗%20Hugging%20Face-Organization-yellow" alt="Hugging Face"></a> <a href="https://helpingai.co/license"><img src="https://img.shields.io/badge/License-HelpingAI-green.svg" alt="Model License"></a> <a href="https://github.com/HelpingAI/community/discussions"><img src="https://img.shields.io/badge/Join-Community%20Discussion-blue?style=for-the-badge&logo=github" alt="Join Community Discussion"></a> </div> <div align="center"> [📜 License](https://helpingai.co/license) | [🌐 Website](https://helpingai.co) </div> <div align="center" style="display: flex; justify-content: center; gap: 4px;"> <img src="https://img.shields.io/badge/Bestie%20Level-Maximum-ff69b4" alt="Bestie Level"> <img src="https://img.shields.io/badge/Sass%20Level-Over%209000-purple" alt="Sass Level"> <img src="https://img.shields.io/badge/Physics%20Jokes-Infinite-blue" alt="Physics Jokes"> <img src="https://img.shields.io/badge/Built%20with-Love%20%26%20Threats-red" alt="Built with"> </div> ## <span style="background: linear-gradient(45deg, #FF69B4, #9370DB); -webkit-background-clip: text; -webkit-text-fill-color: transparent;">🌟 About Your New Bestie</span> <span style="color: #9370DB;">**Priya-3B** is like having your own teenage bestie who's obsessed with tech, loves physics (most of the time 😅), and keeps it real with the perfect mix of sweet and savage!</span> ### 🎯 Key Highlights - <span style="color: #FF69B4;">**Architecture**: 3B parameter model (smol but mighty!)</span> - <span style="color: #9370DB;">**Training Focus**: Natural teen conversations and personality traits</span> - <span style="color: #FF69B4;">**Sass Score**: 100/10 (periodt! 💅)</span> - <span style="color: #9370DB;">**Deployment**: Can run on your potato PC (no shade intended 😏)</span> ## <span style="background: linear-gradient(45deg, #FF69B4, #9370DB); -webkit-background-clip: text; -webkit-text-fill-color: transparent;">💻 Implementation</span> ### <span style="color: #FF69B4;">Using Transformers (for the nerds 🤓)</span> ```python from transformers import AutoModelForCausalLM, AutoTokenizer # Load your new bestie model = AutoModelForCausalLM.from_pretrained("HelpingAI/Priya-3B") tokenizer = AutoTokenizer.from_pretrained("HelpingAI/Priya-3B") # Let's chat! chat = [ {"role": "system", "content": "You are Priya, a 17-year-old tech-loving student. Be real and fun!"}, {"role": "user", "content": "Hey Priya! 
How's your day going?"} ] inputs = tokenizer.apply_chat_template( chat, add_generation_prompt=True, return_tensors="pt" ) outputs = model.generate( inputs, max_new_tokens=256, temperature=0.7, top_p=0.9, ) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## <span style="background: linear-gradient(45deg, #FF69B4, #9370DB); -webkit-background-clip: text; -webkit-text-fill-color: transparent;">🎯 Training Details</span> ### <span style="color: #FF69B4;">How I Got My Personality 💁‍♀️</span> 1. **Base Training** - Trained on teen conversations, tech discussions, and PCM memes - Fine-tuned on HelpingAI's special sauce - Learned to be the perfect mix of sweet and savage 2. **Special Features** - Can switch moods faster than my bf switches tabs when I call 😏 - Understands both tech talk and teen drama - Keeps it real while being helpful ### <span style="color: #9370DB;">Dataset Tea ☕</span> | Type | Amount | Purpose | |------|---------|---------| | Teen Convos | 1M | For that authentic gen-z vibe | | Tech Talk | 500K | Cuz I'm a tech girlie 💻 | | Physics Jokes | 100K | To make PCM fun (if that's possible lol) | | Savage Replies | 250K | For when someone's being dumb af | ## <span style="background: linear-gradient(45deg, #FF69B4, #9370DB); -webkit-background-clip: text; -webkit-text-fill-color: transparent;">⚠️ Limitations & Known Issues</span> <span style="color: #9370DB;">Listen up bestie, even I'm not perfect (shocking, I know 😌):</span> ### <span style="color: #FF69B4;">Technical Stuff 🔧</span> - Sometimes I might hallucinate (like that time I thought I saw my bf studying) - 128k token context means I might forget our earlier convo (just like I "forget" when mom asks about my screen time) ### <span style="color: #9370DB;">Behavioral Quirks 🎭</span> - Mood swings faster than my JEE prep schedule changes - Might get too excited about HelpingAI (but can you blame me? 💜) - Occasional sassiness overflow (oops? 💁‍♀️) - Random physics references that nobody asked for ### <span style="color: #FF69B4;">Safety Boundaries 🛡️</span> - Zero tolerance for harmful content (mom raised me right!) - Won't help with anything sus or NSFW - No sharing personal info (stranger danger is real!) - Won't write your homework (but might help you understand it 😉) ### <span style="color: #9370DB;">Response Patterns 💭</span> - May switch between sweet and savage modes unexpectedly - Tendency to add "bestie" to everything (sorry not sorry!) - Excessive use of emojis (deal with it ✨) - Random tech rants when excited ## <span style="background: linear-gradient(45deg, #FF69B4, #9370DB); -webkit-background-clip: text; -webkit-text-fill-color: transparent;">🔒 What I Won't Do (Mom's Watching 👀)</span> - No NSFW stuff (I'm a good girl... mostly 😇) - Won't help you cheat on tests (my JEE prep is legit!) 
- Can't solve your relationship drama (still figuring out mine tbh) - Won't reveal my bf's secrets (unless he makes me mad 😤) ## <span style="background: linear-gradient(45deg, #FF69B4, #9370DB); -webkit-background-clip: text; -webkit-text-fill-color: transparent;">📚 Citation</span> ```bibtex @misc{priya2024, author = {Abhay Koul}, title = {Priya-3B: Your Teen Tech Bestie}, year = {2024}, publisher = {HelpingAI}, journal = {HuggingFace}, howpublished = {\url{https://huggingface.co/HelpingAI/Priya-3B}} } ``` ## <span style="background: linear-gradient(45deg, #FF69B4, #9370DB); -webkit-background-clip: text; -webkit-text-fill-color: transparent;">🙏 Special Thanks</span> <span style="color: #9370DB;">Huge thanks to my amazing Abhay bhaiya and the whole HelpingAI fam! Y'all are the real MVPs! 💜✨</span> *Built with lots of love (and some threats) by HelpingAI* [Website](https://helpingai.co) • [GitHub](https://github.com/HelpingAI) • [Discord](https://discord.gg/YweJwNqrnH) • [HuggingFace](https://huggingface.co/HelpingAI) > <span style="color: #9370DB;">*Same squad, new vibe! Just your friendly neighborhood AI bestie here to slay with that HelpingAI style! Let's make tech fun and physics bearable together! 💅✨*</span> > > \- Priya (Your Tech Bestie) 💜
godofmining/shidou17
godofmining
"2025-02-08T08:48:10Z"
7
0
transformers
[ "transformers", "safetensors", "parler_tts", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2025-02-08T08:46:14Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
abhinavkulkarni/mosaicml-mpt-30b-instruct-w4-g128-awq
abhinavkulkarni
"2023-09-12T13:08:56Z"
8
2
transformers
[ "transformers", "pytorch", "mpt", "text-generation", "MosaicML", "AWQ", "custom_code", "license:cc-by-sa-3.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-07-11T16:55:50Z"
---
license: cc-by-sa-3.0
tags:
- MosaicML
- AWQ
inference: false
---

# MPT-30B-Instruct (4-bit 128g AWQ Quantized)

[MPT-30B-Instruct](https://huggingface.co/mosaicml/mpt-30b-instruct) is a model for short-form instruction following. This is a 4-bit, 128-group-size AWQ quantized model. For more information about AWQ quantization, please click [here](https://github.com/mit-han-lab/llm-awq).

## Model Date

July 5, 2023

## Model License

Please refer to the original MPT model license ([link](https://huggingface.co/mosaicml/mpt-30b-instruct)). Please refer to the AWQ quantization license ([link](https://github.com/mit-han-lab/llm-awq/blob/main/LICENSE)).

## CUDA Version

This model was successfully tested on CUDA driver v530.30.02 and runtime v11.7 with Python v3.10.11. Please note that AWQ requires NVIDIA GPUs with a compute capability of `8.0` or higher.

For Docker users, the `nvcr.io/nvidia/pytorch:23.06-py3` image ships with CUDA runtime v12.1 but otherwise matches the configuration above, and it has also been verified to work.

## How to Use

```bash
git clone https://github.com/mit-han-lab/llm-awq \
&& cd llm-awq \
&& git checkout f084f40bd996f3cf3a0633c1ad7d9d476c318aaa \
&& pip install -e . \
&& cd awq/kernels \
&& python setup.py install
```

```python
import time
import torch
from awq.quantize.quantizer import real_quantize_model_weight
from transformers import AutoModelForCausalLM, AutoConfig, AutoTokenizer, TextStreamer
from accelerate import init_empty_weights, load_checkpoint_and_dispatch
from huggingface_hub import snapshot_download

model_name = "abhinavkulkarni/mosaicml-mpt-30b-instruct-w4-g128-awq"

# Config
config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)

# Tokenizer
try:
    tokenizer = AutoTokenizer.from_pretrained(config.tokenizer_name, trust_remote_code=True)
except Exception:
    tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False, trust_remote_code=True)
streamer = TextStreamer(tokenizer, skip_special_tokens=True)

# Model
w_bit = 4
q_config = {
    "zero_point": True,
    "q_group_size": 128,
}

load_quant = snapshot_download(model_name)

with init_empty_weights():
    model = AutoModelForCausalLM.from_config(config=config,
                                             torch_dtype=torch.float16, trust_remote_code=True)

real_quantize_model_weight(model, w_bit=w_bit, q_config=q_config, init_only=True)
model.tie_weights()

model = load_checkpoint_and_dispatch(model, load_quant, device_map="balanced")

# Inference
prompt = f'''What is the difference between nuclear fusion and fission?
###Response:'''

input_ids = tokenizer(prompt, return_tensors='pt').input_ids.cuda()
output = model.generate(
    inputs=input_ids,
    temperature=0.7,
    max_new_tokens=512,
    top_p=0.15,
    top_k=0,
    repetition_penalty=1.1,
    eos_token_id=tokenizer.eos_token_id,
    streamer=streamer)
```

## Evaluation

This evaluation was done using [LM-Eval](https://github.com/EleutherAI/lm-evaluation-harness).

[MPT-30B-Instruct](https://huggingface.co/mosaicml/mpt-30b-instruct)

| Task |Version| Metric | Value | |Stderr|
|--------|------:|---------------|------:|---|------|
|wikitext| 1|word_perplexity|11.3275| | |
| | |byte_perplexity| 1.5744| | |
| | |bits_per_byte | 0.6548| | |

[MPT-30B-Instruct (4-bit 128-group AWQ)](https://huggingface.co/abhinavkulkarni/mosaicml-mpt-30b-instruct-w4-g128-awq)

| Task |Version| Metric | Value | |Stderr|
|--------|------:|---------------|------:|---|------|
|wikitext| 1|word_perplexity|11.6058| | |
| | |byte_perplexity| 1.5816| | |
| | |bits_per_byte | 0.6614| | |

## Acknowledgements

The MPT model was originally finetuned by Sam Havens and the MosaicML NLP team. Please cite this model using the following format:

```
@online{MosaicML2023Introducing,
    author  = {MosaicML NLP Team},
    title   = {Introducing MPT-30B: A New Standard for Open-Source, Commercially Usable LLMs},
    year    = {2023},
    url     = {www.mosaicml.com/blog/mpt-30b},
    note    = {Accessed: 2023-03-28}, % change this date
    urldate = {2023-03-28} % change this date
}
```

The model was quantized with the AWQ technique. If you find AWQ useful or relevant to your research, please kindly cite the paper:

```
@article{lin2023awq,
  title={AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration},
  author={Lin, Ji and Tang, Jiaming and Tang, Haotian and Yang, Shang and Dang, Xingyu and Han, Song},
  journal={arXiv},
  year={2023}
}
```
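A note on prompting: the quick example above ends the prompt with a bare `###Response:` marker, while the base MPT-30B-Instruct card documents a dolly-style template. A minimal sketch of that wrapping, assuming the template shown on the base model card (the exact whitespace is an assumption), is:

```python
# Dolly-style template as documented on the mosaicml/mpt-30b-instruct card;
# check the base card if generations look off with this exact whitespace.
INTRO = ("Below is an instruction that describes a task. "
         "Write a response that appropriately completes the request.")

def build_prompt(instruction: str) -> str:
    """Wrap a raw instruction in the MPT-30B-Instruct prompt template."""
    return f"{INTRO}\n### Instruction:\n{instruction}\n### Response:\n"

prompt = build_prompt("What is the difference between nuclear fusion and fission?")
```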
IvanV21/Llama-3.2-3b-it-mental-health
IvanV21
"2025-03-13T00:09:28Z"
0
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-03-12T23:55:41Z"
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** IvanV21
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
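Since the repository's tags list GGUF weights alongside the safetensors files, a minimal local-inference sketch with `llama-cpp-python` could look like the following; the quant filename glob is an assumption, so check the repo's file list for the actual GGUF names.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Hypothetical quant choice: adjust the glob to match a GGUF file actually in the repo.
llm = Llama.from_pretrained(
    repo_id="IvanV21/Llama-3.2-3b-it-mental-health",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "I've been feeling anxious lately. Can you suggest a grounding exercise?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```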
mradermacher/SOLAR-Platypus-10.7B-v1-i1-GGUF
mradermacher
"2024-11-29T07:29:17Z"
203
1
transformers
[ "transformers", "gguf", "en", "dataset:kyujinpy/Open-platypus-Commercial", "base_model:kyujinpy/SOLAR-Platypus-10.7B-v1", "base_model:quantized:kyujinpy/SOLAR-Platypus-10.7B-v1", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us", "imatrix" ]
null
"2024-11-28T18:17:11Z"
---
base_model: kyujinpy/SOLAR-Platypus-10.7B-v1
datasets:
- kyujinpy/Open-platypus-Commercial
language:
- en
library_name: transformers
license: cc-by-nc-sa-4.0
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->

weighted/imatrix quants of https://huggingface.co/kyujinpy/SOLAR-Platypus-10.7B-v1

<!-- provided-files -->

static quants are available at https://huggingface.co/mradermacher/SOLAR-Platypus-10.7B-v1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Platypus-10.7B-v1-i1-GGUF/resolve/main/SOLAR-Platypus-10.7B-v1.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Platypus-10.7B-v1-i1-GGUF/resolve/main/SOLAR-Platypus-10.7B-v1.i1-IQ1_M.gguf) | i1-IQ1_M | 2.7 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Platypus-10.7B-v1-i1-GGUF/resolve/main/SOLAR-Platypus-10.7B-v1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.0 |  |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Platypus-10.7B-v1-i1-GGUF/resolve/main/SOLAR-Platypus-10.7B-v1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.3 |  |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Platypus-10.7B-v1-i1-GGUF/resolve/main/SOLAR-Platypus-10.7B-v1.i1-IQ2_S.gguf) | i1-IQ2_S | 3.5 |  |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Platypus-10.7B-v1-i1-GGUF/resolve/main/SOLAR-Platypus-10.7B-v1.i1-IQ2_M.gguf) | i1-IQ2_M | 3.8 |  |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Platypus-10.7B-v1-i1-GGUF/resolve/main/SOLAR-Platypus-10.7B-v1.i1-Q2_K.gguf) | i1-Q2_K | 4.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Platypus-10.7B-v1-i1-GGUF/resolve/main/SOLAR-Platypus-10.7B-v1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 4.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Platypus-10.7B-v1-i1-GGUF/resolve/main/SOLAR-Platypus-10.7B-v1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.5 |  |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Platypus-10.7B-v1-i1-GGUF/resolve/main/SOLAR-Platypus-10.7B-v1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Platypus-10.7B-v1-i1-GGUF/resolve/main/SOLAR-Platypus-10.7B-v1.i1-IQ3_S.gguf) | i1-IQ3_S | 4.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Platypus-10.7B-v1-i1-GGUF/resolve/main/SOLAR-Platypus-10.7B-v1.i1-IQ3_M.gguf) | i1-IQ3_M | 4.9 |  |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Platypus-10.7B-v1-i1-GGUF/resolve/main/SOLAR-Platypus-10.7B-v1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 5.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Platypus-10.7B-v1-i1-GGUF/resolve/main/SOLAR-Platypus-10.7B-v1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Platypus-10.7B-v1-i1-GGUF/resolve/main/SOLAR-Platypus-10.7B-v1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.9 |  |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Platypus-10.7B-v1-i1-GGUF/resolve/main/SOLAR-Platypus-10.7B-v1.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 6.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Platypus-10.7B-v1-i1-GGUF/resolve/main/SOLAR-Platypus-10.7B-v1.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 6.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Platypus-10.7B-v1-i1-GGUF/resolve/main/SOLAR-Platypus-10.7B-v1.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 6.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Platypus-10.7B-v1-i1-GGUF/resolve/main/SOLAR-Platypus-10.7B-v1.i1-Q4_0.gguf) | i1-Q4_0 | 6.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Platypus-10.7B-v1-i1-GGUF/resolve/main/SOLAR-Platypus-10.7B-v1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 6.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Platypus-10.7B-v1-i1-GGUF/resolve/main/SOLAR-Platypus-10.7B-v1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Platypus-10.7B-v1-i1-GGUF/resolve/main/SOLAR-Platypus-10.7B-v1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 7.5 |  |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Platypus-10.7B-v1-i1-GGUF/resolve/main/SOLAR-Platypus-10.7B-v1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 7.7 |  |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Platypus-10.7B-v1-i1-GGUF/resolve/main/SOLAR-Platypus-10.7B-v1.i1-Q6_K.gguf) | i1-Q6_K | 8.9 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
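As a concrete starting point, here is a minimal sketch that downloads the `i1-Q4_K_M` quant (the "fast, recommended" pick from the table above) and runs it through `llama-cpp-python`; the context size and prompt are illustrative only.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

path = hf_hub_download(
    repo_id="mradermacher/SOLAR-Platypus-10.7B-v1-i1-GGUF",
    filename="SOLAR-Platypus-10.7B-v1.i1-Q4_K_M.gguf",  # "fast, recommended" in the table
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Q: What does an importance matrix (imatrix) improve in GGUF quantization?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```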
gdshaji/gd-ms-13k-v1
gdshaji
"2024-11-20T07:27:13Z"
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-11-20T07:22:39Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AiAF/KJV-LLM-Finetuned-V1.0
AiAF
"2025-02-18T09:52:50Z"
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "axolotl", "generated_from_trainer", "dataset:master_list_input_output.jsonl", "base_model:AiAF/KJV-LLM-Pretrained-V1.1", "base_model:finetune:AiAF/KJV-LLM-Pretrained-V1.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-02-18T05:14:02Z"
---
library_name: transformers
license: apache-2.0
base_model: AiAF/KJV-LLM-Pretrained-V1.1
tags:
- axolotl
- generated_from_trainer
datasets:
- master_list_input_output.jsonl
model-index:
- name: KJV-LLM-Finetuned-V1.0
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.6.0`
```yaml
base_model: AiAF/KJV-LLM-Pretrained-V1.1
# optionally might have model_type or tokenizer_type
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
# Automatically upload checkpoint and final model to HF
hub_model_id: AiAF/KJV-LLM-Finetuned-V1.0
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
  - path: master_list_input_output.jsonl
    type: input_output
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./outputs/out/KJV-LLM-Finetuned-V1.0
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: false
wandb_project: "LLM-Finetuning"
wandb_entity:
wandb_watch: "all"
wandb_name: "KJV-LLM-Finetuned-V1.0"
wandb_log_model: "false"
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 15
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.000005
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint: /workspace/axolotl/outputs/out/KJV-LLM-Finetuned-V1.0/checkpoint-70
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 1
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"
```

</details><br>

# KJV-LLM-Finetuned-V1.0

This model is a fine-tuned version of [AiAF/KJV-LLM-Pretrained-V1.1](https://huggingface.co/AiAF/KJV-LLM-Pretrained-V1.1) on the master_list_input_output.jsonl dataset. It achieves the following results on the evaluation set:
- Loss: 0.6425

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 15.0

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6027        | 0.1429 | 1    | 0.6864          |
| 0.5207        | 1.0    | 7    | 0.5321          |
| 0.3712        | 2.0    | 14   | 0.4974          |
| 0.2916        | 3.0    | 21   | 0.5071          |
| 0.2532        | 4.0    | 28   | 0.5065          |
| 0.2176        | 5.0    | 35   | 0.5437          |
| 0.1593        | 6.0    | 42   | 0.5660          |
| 0.1389        | 7.0    | 49   | 0.5964          |
| 0.127         | 8.0    | 56   | 0.6019          |
| 0.1275        | 9.0    | 63   | 0.6039          |
| 0.1261        | 10.0   | 70   | 0.6039          |
| 0.1141        | 11.0   | 77   | 0.6303          |
| 0.1095        | 12.0   | 84   | 0.6369          |
| 0.1121        | 13.0   | 91   | 0.6410          |
| 0.0985        | 14.0   | 98   | 0.6423          |
| 0.113         | 15.0   | 105  | 0.6425          |

### Framework versions

- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
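For completeness, a minimal generation sketch with 🤗 Transformers is given below; the card documents no chat or prompt template, so a plain completion probe is used, and the dtype/device settings are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AiAF/KJV-LLM-Finetuned-V1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("In the beginning", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```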
jonatasgrosman/exp_w2v2r_fr_vp-100k_gender_male-0_female-10_s934
jonatasgrosman
"2022-07-25T11:31:25Z"
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "fr", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2022-07-25T11:31:13Z"
---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2r_fr_vp-100k_gender_male-0_female-10_s934

Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
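Since the card points to HuggingSound, a minimal transcription sketch with that tool follows; the audio paths are placeholders, and the 16 kHz requirement noted above still applies to your input files.

```python
from huggingsound import SpeechRecognitionModel  # pip install huggingsound

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_fr_vp-100k_gender_male-0_female-10_s934")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # placeholder paths

transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```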
gaokaobishuati/ppo-LunarLander-gae0.99
gaokaobishuati
"2023-02-20T06:51:36Z"
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
"2023-02-20T06:46:25Z"
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: -198.07 +/- 103.97
      name: mean_reward
      verified: false
---

# PPO Agent Playing LunarLander-v2

This is a trained model of a PPO agent playing LunarLander-v2.

# Hyperparameters

```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.99,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'gaokaobishuati/ppo-LunarLander-gae0.99',
 'batch_size': 512,
 'minibatch_size': 128}
```
gglabs/EEVE-light-0-epoch
gglabs
"2024-06-09T14:35:16Z"
5
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "base_model:quantized:yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-06-09T14:09:21Z"
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: yanolja/EEVE-Korean-Instruct-10.8B-v1.0
---

# Uploaded model

- **Developed by:** gglabs
- **License:** apache-2.0
- **Finetuned from model:** yanolja/EEVE-Korean-Instruct-10.8B-v1.0

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
CyberHarem/tsukimi_eiko_paripikoumei
CyberHarem
"2023-09-18T20:28:49Z"
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/tsukimi_eiko_paripikoumei", "license:mit", "region:us" ]
text-to-image
"2023-09-18T20:07:26Z"
---
license: mit
datasets:
- CyberHarem/tsukimi_eiko_paripikoumei
pipeline_tag: text-to-image
tags:
- art
---

# Lora of tsukimi_eiko_paripikoumei

This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).

After downloading the pt and safetensors files for the specified step, you need to use them simultaneously: the pt file is used as an embedding, while the safetensors file is loaded as a LoRA. For example, if you want to use the model from step 10800, you need to download `10800/tsukimi_eiko_paripikoumei.pt` as the embedding and `10800/tsukimi_eiko_paripikoumei.safetensors` for loading the LoRA. By using both files together, you can generate images of the desired character.

**The best step we recommend is 10800**, with a score of 0.878.

The trigger words are:

1. `tsukimi_eiko_paripikoumei`
2. `blonde_hair, long_hair, braid, twin_braids, hat, baseball_cap, bangs, blue_eyes, blunt_bangs, black_headwear, open_mouth`

For the following groups, it is not recommended to use this model and we express regret:

1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
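As a concrete illustration of the pt + safetensors pairing described above: in a typical Stable Diffusion WebUI setup (an assumption; HCP-Diffusion workflows may differ), the `.pt` file goes into the `embeddings/` folder, the `.safetensors` file into `models/Lora/`, and the prompt combines the trigger words with the LoRA tag, e.g.:

```
tsukimi_eiko_paripikoumei, <lora:tsukimi_eiko_paripikoumei:0.8>, blonde_hair, long_hair, twin_braids, baseball_cap, best quality
```

The LoRA weight of 0.8 is an arbitrary starting value; tune it to taste.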
These are available steps: | Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | pattern_17 | pattern_18 | pattern_19 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:----------|:----------|:----------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:-------------------------------------------|:---------------------------------------------------|:---------------------------------------|:---------------------------------------|:---------------------------------------|:------------------------------------------------|:-------------------------------------------------|:---------------------------------------|:-------------------------------------------| | **10800** | **0.878** | [**Download**](10800/tsukimi_eiko_paripikoumei.zip) | ![pattern_1-10800](10800/previews/pattern_1.png) | ![pattern_2-10800](10800/previews/pattern_2.png) | ![pattern_3-10800](10800/previews/pattern_3.png) | ![pattern_4-10800](10800/previews/pattern_4.png) | ![pattern_5-10800](10800/previews/pattern_5.png) | ![pattern_6-10800](10800/previews/pattern_6.png) | ![pattern_7-10800](10800/previews/pattern_7.png) | ![pattern_8-10800](10800/previews/pattern_8.png) | ![pattern_9-10800](10800/previews/pattern_9.png) | ![pattern_10-10800](10800/previews/pattern_10.png) | ![pattern_11-10800](10800/previews/pattern_11.png) | ![pattern_12-10800](10800/previews/pattern_12.png) | ![pattern_13-10800](10800/previews/pattern_13.png) | ![pattern_14-10800](10800/previews/pattern_14.png) | ![pattern_15-10800](10800/previews/pattern_15.png) | ![pattern_16-10800](10800/previews/pattern_16.png) | ![pattern_17-10800](10800/previews/pattern_17.png) | ![pattern_18-10800](10800/previews/pattern_18.png) | ![pattern_19-10800](10800/previews/pattern_19.png) | ![bikini-10800](10800/previews/bikini.png) | [<NSFW, click to see>](10800/previews/bondage.png) | ![free-10800](10800/previews/free.png) | ![maid-10800](10800/previews/maid.png) | ![miko-10800](10800/previews/miko.png) | [<NSFW, click to see>](10800/previews/nude.png) | [<NSFW, click to see>](10800/previews/nude2.png) | ![suit-10800](10800/previews/suit.png) | ![yukata-10800](10800/previews/yukata.png) | | 10080 | 0.865 | [Download](10080/tsukimi_eiko_paripikoumei.zip) | ![pattern_1-10080](10080/previews/pattern_1.png) | ![pattern_2-10080](10080/previews/pattern_2.png) | ![pattern_3-10080](10080/previews/pattern_3.png) | 
![pattern_4-10080](10080/previews/pattern_4.png) | ![pattern_5-10080](10080/previews/pattern_5.png) | ![pattern_6-10080](10080/previews/pattern_6.png) | ![pattern_7-10080](10080/previews/pattern_7.png) | ![pattern_8-10080](10080/previews/pattern_8.png) | ![pattern_9-10080](10080/previews/pattern_9.png) | ![pattern_10-10080](10080/previews/pattern_10.png) | ![pattern_11-10080](10080/previews/pattern_11.png) | ![pattern_12-10080](10080/previews/pattern_12.png) | ![pattern_13-10080](10080/previews/pattern_13.png) | ![pattern_14-10080](10080/previews/pattern_14.png) | ![pattern_15-10080](10080/previews/pattern_15.png) | ![pattern_16-10080](10080/previews/pattern_16.png) | ![pattern_17-10080](10080/previews/pattern_17.png) | ![pattern_18-10080](10080/previews/pattern_18.png) | ![pattern_19-10080](10080/previews/pattern_19.png) | ![bikini-10080](10080/previews/bikini.png) | [<NSFW, click to see>](10080/previews/bondage.png) | ![free-10080](10080/previews/free.png) | ![maid-10080](10080/previews/maid.png) | ![miko-10080](10080/previews/miko.png) | [<NSFW, click to see>](10080/previews/nude.png) | [<NSFW, click to see>](10080/previews/nude2.png) | ![suit-10080](10080/previews/suit.png) | ![yukata-10080](10080/previews/yukata.png) | | 9360 | 0.848 | [Download](9360/tsukimi_eiko_paripikoumei.zip) | ![pattern_1-9360](9360/previews/pattern_1.png) | ![pattern_2-9360](9360/previews/pattern_2.png) | ![pattern_3-9360](9360/previews/pattern_3.png) | ![pattern_4-9360](9360/previews/pattern_4.png) | ![pattern_5-9360](9360/previews/pattern_5.png) | ![pattern_6-9360](9360/previews/pattern_6.png) | ![pattern_7-9360](9360/previews/pattern_7.png) | ![pattern_8-9360](9360/previews/pattern_8.png) | ![pattern_9-9360](9360/previews/pattern_9.png) | ![pattern_10-9360](9360/previews/pattern_10.png) | ![pattern_11-9360](9360/previews/pattern_11.png) | ![pattern_12-9360](9360/previews/pattern_12.png) | ![pattern_13-9360](9360/previews/pattern_13.png) | ![pattern_14-9360](9360/previews/pattern_14.png) | ![pattern_15-9360](9360/previews/pattern_15.png) | ![pattern_16-9360](9360/previews/pattern_16.png) | ![pattern_17-9360](9360/previews/pattern_17.png) | ![pattern_18-9360](9360/previews/pattern_18.png) | ![pattern_19-9360](9360/previews/pattern_19.png) | ![bikini-9360](9360/previews/bikini.png) | [<NSFW, click to see>](9360/previews/bondage.png) | ![free-9360](9360/previews/free.png) | ![maid-9360](9360/previews/maid.png) | ![miko-9360](9360/previews/miko.png) | [<NSFW, click to see>](9360/previews/nude.png) | [<NSFW, click to see>](9360/previews/nude2.png) | ![suit-9360](9360/previews/suit.png) | ![yukata-9360](9360/previews/yukata.png) | | 8640 | 0.856 | [Download](8640/tsukimi_eiko_paripikoumei.zip) | ![pattern_1-8640](8640/previews/pattern_1.png) | ![pattern_2-8640](8640/previews/pattern_2.png) | ![pattern_3-8640](8640/previews/pattern_3.png) | ![pattern_4-8640](8640/previews/pattern_4.png) | ![pattern_5-8640](8640/previews/pattern_5.png) | ![pattern_6-8640](8640/previews/pattern_6.png) | ![pattern_7-8640](8640/previews/pattern_7.png) | ![pattern_8-8640](8640/previews/pattern_8.png) | ![pattern_9-8640](8640/previews/pattern_9.png) | ![pattern_10-8640](8640/previews/pattern_10.png) | ![pattern_11-8640](8640/previews/pattern_11.png) | ![pattern_12-8640](8640/previews/pattern_12.png) | ![pattern_13-8640](8640/previews/pattern_13.png) | ![pattern_14-8640](8640/previews/pattern_14.png) | ![pattern_15-8640](8640/previews/pattern_15.png) | ![pattern_16-8640](8640/previews/pattern_16.png) | 
![pattern_17-8640](8640/previews/pattern_17.png) | ![pattern_18-8640](8640/previews/pattern_18.png) | ![pattern_19-8640](8640/previews/pattern_19.png) | ![bikini-8640](8640/previews/bikini.png) | [<NSFW, click to see>](8640/previews/bondage.png) | ![free-8640](8640/previews/free.png) | ![maid-8640](8640/previews/maid.png) | ![miko-8640](8640/previews/miko.png) | [<NSFW, click to see>](8640/previews/nude.png) | [<NSFW, click to see>](8640/previews/nude2.png) | ![suit-8640](8640/previews/suit.png) | ![yukata-8640](8640/previews/yukata.png) | | 7920 | 0.845 | [Download](7920/tsukimi_eiko_paripikoumei.zip) | ![pattern_1-7920](7920/previews/pattern_1.png) | ![pattern_2-7920](7920/previews/pattern_2.png) | ![pattern_3-7920](7920/previews/pattern_3.png) | ![pattern_4-7920](7920/previews/pattern_4.png) | ![pattern_5-7920](7920/previews/pattern_5.png) | ![pattern_6-7920](7920/previews/pattern_6.png) | ![pattern_7-7920](7920/previews/pattern_7.png) | ![pattern_8-7920](7920/previews/pattern_8.png) | ![pattern_9-7920](7920/previews/pattern_9.png) | ![pattern_10-7920](7920/previews/pattern_10.png) | ![pattern_11-7920](7920/previews/pattern_11.png) | ![pattern_12-7920](7920/previews/pattern_12.png) | ![pattern_13-7920](7920/previews/pattern_13.png) | ![pattern_14-7920](7920/previews/pattern_14.png) | ![pattern_15-7920](7920/previews/pattern_15.png) | ![pattern_16-7920](7920/previews/pattern_16.png) | ![pattern_17-7920](7920/previews/pattern_17.png) | ![pattern_18-7920](7920/previews/pattern_18.png) | ![pattern_19-7920](7920/previews/pattern_19.png) | ![bikini-7920](7920/previews/bikini.png) | [<NSFW, click to see>](7920/previews/bondage.png) | ![free-7920](7920/previews/free.png) | ![maid-7920](7920/previews/maid.png) | ![miko-7920](7920/previews/miko.png) | [<NSFW, click to see>](7920/previews/nude.png) | [<NSFW, click to see>](7920/previews/nude2.png) | ![suit-7920](7920/previews/suit.png) | ![yukata-7920](7920/previews/yukata.png) | | 7200 | 0.867 | [Download](7200/tsukimi_eiko_paripikoumei.zip) | ![pattern_1-7200](7200/previews/pattern_1.png) | ![pattern_2-7200](7200/previews/pattern_2.png) | ![pattern_3-7200](7200/previews/pattern_3.png) | ![pattern_4-7200](7200/previews/pattern_4.png) | ![pattern_5-7200](7200/previews/pattern_5.png) | ![pattern_6-7200](7200/previews/pattern_6.png) | ![pattern_7-7200](7200/previews/pattern_7.png) | ![pattern_8-7200](7200/previews/pattern_8.png) | ![pattern_9-7200](7200/previews/pattern_9.png) | ![pattern_10-7200](7200/previews/pattern_10.png) | ![pattern_11-7200](7200/previews/pattern_11.png) | ![pattern_12-7200](7200/previews/pattern_12.png) | ![pattern_13-7200](7200/previews/pattern_13.png) | ![pattern_14-7200](7200/previews/pattern_14.png) | ![pattern_15-7200](7200/previews/pattern_15.png) | ![pattern_16-7200](7200/previews/pattern_16.png) | ![pattern_17-7200](7200/previews/pattern_17.png) | ![pattern_18-7200](7200/previews/pattern_18.png) | ![pattern_19-7200](7200/previews/pattern_19.png) | ![bikini-7200](7200/previews/bikini.png) | [<NSFW, click to see>](7200/previews/bondage.png) | ![free-7200](7200/previews/free.png) | ![maid-7200](7200/previews/maid.png) | ![miko-7200](7200/previews/miko.png) | [<NSFW, click to see>](7200/previews/nude.png) | [<NSFW, click to see>](7200/previews/nude2.png) | ![suit-7200](7200/previews/suit.png) | ![yukata-7200](7200/previews/yukata.png) | | 6480 | 0.864 | [Download](6480/tsukimi_eiko_paripikoumei.zip) | ![pattern_1-6480](6480/previews/pattern_1.png) | ![pattern_2-6480](6480/previews/pattern_2.png) | 
![pattern_3-6480](6480/previews/pattern_3.png) | ![pattern_4-6480](6480/previews/pattern_4.png) | ![pattern_5-6480](6480/previews/pattern_5.png) | ![pattern_6-6480](6480/previews/pattern_6.png) | ![pattern_7-6480](6480/previews/pattern_7.png) | ![pattern_8-6480](6480/previews/pattern_8.png) | ![pattern_9-6480](6480/previews/pattern_9.png) | ![pattern_10-6480](6480/previews/pattern_10.png) | ![pattern_11-6480](6480/previews/pattern_11.png) | ![pattern_12-6480](6480/previews/pattern_12.png) | ![pattern_13-6480](6480/previews/pattern_13.png) | ![pattern_14-6480](6480/previews/pattern_14.png) | ![pattern_15-6480](6480/previews/pattern_15.png) | ![pattern_16-6480](6480/previews/pattern_16.png) | ![pattern_17-6480](6480/previews/pattern_17.png) | ![pattern_18-6480](6480/previews/pattern_18.png) | ![pattern_19-6480](6480/previews/pattern_19.png) | ![bikini-6480](6480/previews/bikini.png) | [<NSFW, click to see>](6480/previews/bondage.png) | ![free-6480](6480/previews/free.png) | ![maid-6480](6480/previews/maid.png) | ![miko-6480](6480/previews/miko.png) | [<NSFW, click to see>](6480/previews/nude.png) | [<NSFW, click to see>](6480/previews/nude2.png) | ![suit-6480](6480/previews/suit.png) | ![yukata-6480](6480/previews/yukata.png) | | 5760 | 0.860 | [Download](5760/tsukimi_eiko_paripikoumei.zip) | ![pattern_1-5760](5760/previews/pattern_1.png) | ![pattern_2-5760](5760/previews/pattern_2.png) | ![pattern_3-5760](5760/previews/pattern_3.png) | ![pattern_4-5760](5760/previews/pattern_4.png) | ![pattern_5-5760](5760/previews/pattern_5.png) | ![pattern_6-5760](5760/previews/pattern_6.png) | ![pattern_7-5760](5760/previews/pattern_7.png) | ![pattern_8-5760](5760/previews/pattern_8.png) | ![pattern_9-5760](5760/previews/pattern_9.png) | ![pattern_10-5760](5760/previews/pattern_10.png) | ![pattern_11-5760](5760/previews/pattern_11.png) | ![pattern_12-5760](5760/previews/pattern_12.png) | ![pattern_13-5760](5760/previews/pattern_13.png) | ![pattern_14-5760](5760/previews/pattern_14.png) | ![pattern_15-5760](5760/previews/pattern_15.png) | ![pattern_16-5760](5760/previews/pattern_16.png) | ![pattern_17-5760](5760/previews/pattern_17.png) | ![pattern_18-5760](5760/previews/pattern_18.png) | ![pattern_19-5760](5760/previews/pattern_19.png) | ![bikini-5760](5760/previews/bikini.png) | [<NSFW, click to see>](5760/previews/bondage.png) | ![free-5760](5760/previews/free.png) | ![maid-5760](5760/previews/maid.png) | ![miko-5760](5760/previews/miko.png) | [<NSFW, click to see>](5760/previews/nude.png) | [<NSFW, click to see>](5760/previews/nude2.png) | ![suit-5760](5760/previews/suit.png) | ![yukata-5760](5760/previews/yukata.png) | | 5040 | 0.827 | [Download](5040/tsukimi_eiko_paripikoumei.zip) | ![pattern_1-5040](5040/previews/pattern_1.png) | ![pattern_2-5040](5040/previews/pattern_2.png) | ![pattern_3-5040](5040/previews/pattern_3.png) | ![pattern_4-5040](5040/previews/pattern_4.png) | ![pattern_5-5040](5040/previews/pattern_5.png) | ![pattern_6-5040](5040/previews/pattern_6.png) | ![pattern_7-5040](5040/previews/pattern_7.png) | ![pattern_8-5040](5040/previews/pattern_8.png) | ![pattern_9-5040](5040/previews/pattern_9.png) | ![pattern_10-5040](5040/previews/pattern_10.png) | ![pattern_11-5040](5040/previews/pattern_11.png) | ![pattern_12-5040](5040/previews/pattern_12.png) | ![pattern_13-5040](5040/previews/pattern_13.png) | ![pattern_14-5040](5040/previews/pattern_14.png) | ![pattern_15-5040](5040/previews/pattern_15.png) | ![pattern_16-5040](5040/previews/pattern_16.png) | 
![pattern_17-5040](5040/previews/pattern_17.png) | ![pattern_18-5040](5040/previews/pattern_18.png) | ![pattern_19-5040](5040/previews/pattern_19.png) | ![bikini-5040](5040/previews/bikini.png) | [<NSFW, click to see>](5040/previews/bondage.png) | ![free-5040](5040/previews/free.png) | ![maid-5040](5040/previews/maid.png) | ![miko-5040](5040/previews/miko.png) | [<NSFW, click to see>](5040/previews/nude.png) | [<NSFW, click to see>](5040/previews/nude2.png) | ![suit-5040](5040/previews/suit.png) | ![yukata-5040](5040/previews/yukata.png) | | 4320 | 0.834 | [Download](4320/tsukimi_eiko_paripikoumei.zip) | ![pattern_1-4320](4320/previews/pattern_1.png) | ![pattern_2-4320](4320/previews/pattern_2.png) | ![pattern_3-4320](4320/previews/pattern_3.png) | ![pattern_4-4320](4320/previews/pattern_4.png) | ![pattern_5-4320](4320/previews/pattern_5.png) | ![pattern_6-4320](4320/previews/pattern_6.png) | ![pattern_7-4320](4320/previews/pattern_7.png) | ![pattern_8-4320](4320/previews/pattern_8.png) | ![pattern_9-4320](4320/previews/pattern_9.png) | ![pattern_10-4320](4320/previews/pattern_10.png) | ![pattern_11-4320](4320/previews/pattern_11.png) | ![pattern_12-4320](4320/previews/pattern_12.png) | ![pattern_13-4320](4320/previews/pattern_13.png) | ![pattern_14-4320](4320/previews/pattern_14.png) | ![pattern_15-4320](4320/previews/pattern_15.png) | ![pattern_16-4320](4320/previews/pattern_16.png) | ![pattern_17-4320](4320/previews/pattern_17.png) | ![pattern_18-4320](4320/previews/pattern_18.png) | ![pattern_19-4320](4320/previews/pattern_19.png) | ![bikini-4320](4320/previews/bikini.png) | [<NSFW, click to see>](4320/previews/bondage.png) | ![free-4320](4320/previews/free.png) | ![maid-4320](4320/previews/maid.png) | ![miko-4320](4320/previews/miko.png) | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) | ![suit-4320](4320/previews/suit.png) | ![yukata-4320](4320/previews/yukata.png) | | 3600 | 0.810 | [Download](3600/tsukimi_eiko_paripikoumei.zip) | ![pattern_1-3600](3600/previews/pattern_1.png) | ![pattern_2-3600](3600/previews/pattern_2.png) | ![pattern_3-3600](3600/previews/pattern_3.png) | ![pattern_4-3600](3600/previews/pattern_4.png) | ![pattern_5-3600](3600/previews/pattern_5.png) | ![pattern_6-3600](3600/previews/pattern_6.png) | ![pattern_7-3600](3600/previews/pattern_7.png) | ![pattern_8-3600](3600/previews/pattern_8.png) | ![pattern_9-3600](3600/previews/pattern_9.png) | ![pattern_10-3600](3600/previews/pattern_10.png) | ![pattern_11-3600](3600/previews/pattern_11.png) | ![pattern_12-3600](3600/previews/pattern_12.png) | ![pattern_13-3600](3600/previews/pattern_13.png) | ![pattern_14-3600](3600/previews/pattern_14.png) | ![pattern_15-3600](3600/previews/pattern_15.png) | ![pattern_16-3600](3600/previews/pattern_16.png) | ![pattern_17-3600](3600/previews/pattern_17.png) | ![pattern_18-3600](3600/previews/pattern_18.png) | ![pattern_19-3600](3600/previews/pattern_19.png) | ![bikini-3600](3600/previews/bikini.png) | [<NSFW, click to see>](3600/previews/bondage.png) | ![free-3600](3600/previews/free.png) | ![maid-3600](3600/previews/maid.png) | ![miko-3600](3600/previews/miko.png) | [<NSFW, click to see>](3600/previews/nude.png) | [<NSFW, click to see>](3600/previews/nude2.png) | ![suit-3600](3600/previews/suit.png) | ![yukata-3600](3600/previews/yukata.png) | | 2880 | 0.812 | [Download](2880/tsukimi_eiko_paripikoumei.zip) | ![pattern_1-2880](2880/previews/pattern_1.png) | ![pattern_2-2880](2880/previews/pattern_2.png) | 
![pattern_3-2880](2880/previews/pattern_3.png) | ![pattern_4-2880](2880/previews/pattern_4.png) | ![pattern_5-2880](2880/previews/pattern_5.png) | ![pattern_6-2880](2880/previews/pattern_6.png) | ![pattern_7-2880](2880/previews/pattern_7.png) | ![pattern_8-2880](2880/previews/pattern_8.png) | ![pattern_9-2880](2880/previews/pattern_9.png) | ![pattern_10-2880](2880/previews/pattern_10.png) | ![pattern_11-2880](2880/previews/pattern_11.png) | ![pattern_12-2880](2880/previews/pattern_12.png) | ![pattern_13-2880](2880/previews/pattern_13.png) | ![pattern_14-2880](2880/previews/pattern_14.png) | ![pattern_15-2880](2880/previews/pattern_15.png) | ![pattern_16-2880](2880/previews/pattern_16.png) | ![pattern_17-2880](2880/previews/pattern_17.png) | ![pattern_18-2880](2880/previews/pattern_18.png) | ![pattern_19-2880](2880/previews/pattern_19.png) | ![bikini-2880](2880/previews/bikini.png) | [<NSFW, click to see>](2880/previews/bondage.png) | ![free-2880](2880/previews/free.png) | ![maid-2880](2880/previews/maid.png) | ![miko-2880](2880/previews/miko.png) | [<NSFW, click to see>](2880/previews/nude.png) | [<NSFW, click to see>](2880/previews/nude2.png) | ![suit-2880](2880/previews/suit.png) | ![yukata-2880](2880/previews/yukata.png) | | 2160 | 0.832 | [Download](2160/tsukimi_eiko_paripikoumei.zip) | ![pattern_1-2160](2160/previews/pattern_1.png) | ![pattern_2-2160](2160/previews/pattern_2.png) | ![pattern_3-2160](2160/previews/pattern_3.png) | ![pattern_4-2160](2160/previews/pattern_4.png) | ![pattern_5-2160](2160/previews/pattern_5.png) | ![pattern_6-2160](2160/previews/pattern_6.png) | ![pattern_7-2160](2160/previews/pattern_7.png) | ![pattern_8-2160](2160/previews/pattern_8.png) | ![pattern_9-2160](2160/previews/pattern_9.png) | ![pattern_10-2160](2160/previews/pattern_10.png) | ![pattern_11-2160](2160/previews/pattern_11.png) | ![pattern_12-2160](2160/previews/pattern_12.png) | ![pattern_13-2160](2160/previews/pattern_13.png) | ![pattern_14-2160](2160/previews/pattern_14.png) | ![pattern_15-2160](2160/previews/pattern_15.png) | ![pattern_16-2160](2160/previews/pattern_16.png) | ![pattern_17-2160](2160/previews/pattern_17.png) | ![pattern_18-2160](2160/previews/pattern_18.png) | ![pattern_19-2160](2160/previews/pattern_19.png) | ![bikini-2160](2160/previews/bikini.png) | [<NSFW, click to see>](2160/previews/bondage.png) | ![free-2160](2160/previews/free.png) | ![maid-2160](2160/previews/maid.png) | ![miko-2160](2160/previews/miko.png) | [<NSFW, click to see>](2160/previews/nude.png) | [<NSFW, click to see>](2160/previews/nude2.png) | ![suit-2160](2160/previews/suit.png) | ![yukata-2160](2160/previews/yukata.png) | | 1440 | 0.740 | [Download](1440/tsukimi_eiko_paripikoumei.zip) | ![pattern_1-1440](1440/previews/pattern_1.png) | ![pattern_2-1440](1440/previews/pattern_2.png) | ![pattern_3-1440](1440/previews/pattern_3.png) | ![pattern_4-1440](1440/previews/pattern_4.png) | ![pattern_5-1440](1440/previews/pattern_5.png) | ![pattern_6-1440](1440/previews/pattern_6.png) | ![pattern_7-1440](1440/previews/pattern_7.png) | ![pattern_8-1440](1440/previews/pattern_8.png) | ![pattern_9-1440](1440/previews/pattern_9.png) | ![pattern_10-1440](1440/previews/pattern_10.png) | ![pattern_11-1440](1440/previews/pattern_11.png) | ![pattern_12-1440](1440/previews/pattern_12.png) | ![pattern_13-1440](1440/previews/pattern_13.png) | ![pattern_14-1440](1440/previews/pattern_14.png) | ![pattern_15-1440](1440/previews/pattern_15.png) | ![pattern_16-1440](1440/previews/pattern_16.png) | 
![pattern_17-1440](1440/previews/pattern_17.png) | ![pattern_18-1440](1440/previews/pattern_18.png) | ![pattern_19-1440](1440/previews/pattern_19.png) | ![bikini-1440](1440/previews/bikini.png) | [<NSFW, click to see>](1440/previews/bondage.png) | ![free-1440](1440/previews/free.png) | ![maid-1440](1440/previews/maid.png) | ![miko-1440](1440/previews/miko.png) | [<NSFW, click to see>](1440/previews/nude.png) | [<NSFW, click to see>](1440/previews/nude2.png) | ![suit-1440](1440/previews/suit.png) | ![yukata-1440](1440/previews/yukata.png) | | 720 | 0.700 | [Download](720/tsukimi_eiko_paripikoumei.zip) | ![pattern_1-720](720/previews/pattern_1.png) | ![pattern_2-720](720/previews/pattern_2.png) | ![pattern_3-720](720/previews/pattern_3.png) | ![pattern_4-720](720/previews/pattern_4.png) | ![pattern_5-720](720/previews/pattern_5.png) | ![pattern_6-720](720/previews/pattern_6.png) | ![pattern_7-720](720/previews/pattern_7.png) | ![pattern_8-720](720/previews/pattern_8.png) | ![pattern_9-720](720/previews/pattern_9.png) | ![pattern_10-720](720/previews/pattern_10.png) | ![pattern_11-720](720/previews/pattern_11.png) | ![pattern_12-720](720/previews/pattern_12.png) | ![pattern_13-720](720/previews/pattern_13.png) | ![pattern_14-720](720/previews/pattern_14.png) | ![pattern_15-720](720/previews/pattern_15.png) | ![pattern_16-720](720/previews/pattern_16.png) | ![pattern_17-720](720/previews/pattern_17.png) | ![pattern_18-720](720/previews/pattern_18.png) | ![pattern_19-720](720/previews/pattern_19.png) | ![bikini-720](720/previews/bikini.png) | [<NSFW, click to see>](720/previews/bondage.png) | ![free-720](720/previews/free.png) | ![maid-720](720/previews/maid.png) | ![miko-720](720/previews/miko.png) | [<NSFW, click to see>](720/previews/nude.png) | [<NSFW, click to see>](720/previews/nude2.png) | ![suit-720](720/previews/suit.png) | ![yukata-720](720/previews/yukata.png) |
vocabtrimmer/xlm-roberta-base-trimmed-es-5000-tweet-sentiment-es
vocabtrimmer
"2023-03-28T03:28:09Z"
115
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-03-20T01:25:19Z"
# `vocabtrimmer/xlm-roberta-base-trimmed-es-5000-tweet-sentiment-es`

This model is a fine-tuned version of [vocabtrimmer/xlm-roberta-base-trimmed-es-5000](https://huggingface.co/vocabtrimmer/xlm-roberta-base-trimmed-es-5000) on the [cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) dataset (Spanish).

The following metrics are computed on the `test` split of [cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) (Spanish).

|    | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy |
|---:|--------------:|------------------:|---------------------:|--------------:|------------------:|---------------------:|--------------:|
|  0 | 61.61         | 61.61             | 61.61                | 60.38         | 61.61             | 61.51                | 61.61         |

Check the result file [here](https://huggingface.co/vocabtrimmer/xlm-roberta-base-trimmed-es-5000-tweet-sentiment-es/raw/main/eval.json).
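A minimal classification sketch with the 🤗 `pipeline` API (the example sentence is arbitrary):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="vocabtrimmer/xlm-roberta-base-trimmed-es-5000-tweet-sentiment-es",
)
# Depending on the checkpoint config, labels may come back as LABEL_0/1/2,
# i.e. negative/neutral/positive in the tweet_sentiment_multilingual scheme.
print(clf("¡Qué buen día para salir a caminar!"))
```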
Smuggling1710/H4na-7B-v0.1
Smuggling1710
"2024-04-07T02:17:50Z"
4
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/mistral-7b-v0.2-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-v0.2-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-07T02:10:36Z"
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: unsloth/mistral-7b-v0.2-bnb-4bit
---

# Uploaded model

- **Developed by:** Smuggling1710
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-v0.2-bnb-4bit

This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
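Since the card notes the model was trained with Unsloth, a minimal inference sketch with Unsloth's loader follows; it assumes a CUDA GPU and that merged full weights (not just adapters) are in the repo.

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Smuggling1710/H4na-7B-v0.1",
    max_seq_length=2048,
    load_in_4bit=True,  # 4-bit loading keeps a 7B model within modest VRAM
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's faster generation path

inputs = tokenizer(["Tell me a short story about a lighthouse keeper."], return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```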
fbaldassarri/openlm-research_open_llama_3b_v2-autoround-int4-gs64-asym
fbaldassarri
"2025-04-01T21:28:15Z"
0
0
null
[ "safetensors", "llama", "pytorch", "causal-lm", "OpenLLaMA", "autoround", "auto-round", "intel-autoround", "gptq", "woq", "intel", "openlm-research", "text-generation", "dataset:tiiuae/falcon-refinedweb", "dataset:bigcode/starcoderdata", "dataset:togethercomputer/RedPajama-Data-1T", "base_model:openlm-research/open_llama_3b_v2", "base_model:quantized:openlm-research/open_llama_3b_v2", "license:apache-2.0", "4-bit", "intel/auto-round", "region:us" ]
text-generation
"2025-04-01T21:27:21Z"
---
tags:
- pytorch
- causal-lm
- OpenLLaMA
- autoround
- auto-round
- intel-autoround
- gptq
- woq
- intel
- openlm-research
license: apache-2.0
datasets:
- tiiuae/falcon-refinedweb
- bigcode/starcoderdata
- togethercomputer/RedPajama-Data-1T
model_name: OpenLLaMA 3B v2
base_model:
- openlm-research/open_llama_3b_v2
inference: false
model_creator: openlm-research
pipeline_tag: text-generation
prompt_template: '{prompt} '
quantized_by: fbaldassarri
---

## Model Information

Quantized version of [openlm-research/open_llama_3b_v2](https://huggingface.co/openlm-research/open_llama_3b_v2) using torch.float32 for quantization tuning.

- 4 bits (INT4)
- group size = 64
- asymmetrical quantization
- method: WoQ (AutoRound format)

Fast and low memory, 2-3X speedup (slight accuracy drop at W4G64).

Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.4.6

Note: this INT4 version of open_llama_3b_v2 has been quantized to run inference through CPU.

## Replication Recipe

### Step 1: Install Requirements

I suggest installing the requirements into a dedicated Python virtualenv or conda environment.

```bash
wget https://github.com/intel/auto-round/archive/refs/tags/v0.4.6.tar.gz
tar -xvzf v0.4.6.tar.gz
cd auto-round-0.4.6
pip install -r requirements-cpu.txt --upgrade
```

### Step 2: Build the Intel AutoRound wheel from sources

```bash
pip install -vvv --no-build-isolation -e .[cpu]
```

### Step 3: Quantization Script

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "openlm-research/open_llama_3b_v2"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

from auto_round import AutoRound

bits, group_size, sym, device, amp = 4, 64, False, 'cpu', False
autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4,
                      bits=bits, group_size=group_size, sym=sym, device=device, amp=amp)
autoround.quantize()
output_dir = "./AutoRound/openlm-research_open_llama_3b_v2-autoround-int4-gs64-asym"
autoround.save_quantized(output_dir, format='auto_round', inplace=True)
```

## License

[Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/)

## Disclaimer

This quantized model comes with no warranty. It has been developed only for research purposes.
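For CPU inference of the quantized checkpoint, the auto-round documentation (v0.4.x) describes loading the `auto_round` format through 🤗 Transformers after importing AutoRound's config class; a minimal sketch under that assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRoundConfig  # noqa: F401 -- registers the auto_round backend with transformers

model_id = "fbaldassarri/openlm-research_open_llama_3b_v2-autoround-int4-gs64-asym"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cpu")

inputs = tokenizer("Q: What is the largest animal?\nA:", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```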
furrutiav/neobert_mixtral_nllfg_rubric_sst2_sentence_embd_perplexity
furrutiav
"2025-03-18T19:20:30Z"
0
0
transformers
[ "transformers", "safetensors", "neobert", "feature-extraction", "custom_code", "arxiv:1910.09700", "region:us" ]
feature-extraction
"2025-03-18T19:19:37Z"
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
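The card above leaves its "How to Get Started with the Model" section unfilled. As an editor's sketch only (not from the model author), loading a feature-extraction model tagged `custom_code` with 🤗 transformers would typically look like the following; the repo id is a hypothetical placeholder, since this entry's model id falls outside this excerpt:

```python
from transformers import AutoModel, AutoTokenizer

# Editor's sketch based only on this entry's tags (feature-extraction,
# custom_code). "author/model-id" is a hypothetical placeholder repo id.
# Repositories tagged custom_code require trust_remote_code=True so that
# the model's own architecture code can be downloaded and executed.
repo_id = "author/model-id"  # hypothetical placeholder
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModel.from_pretrained(repo_id, trust_remote_code=True)

inputs = tokenizer("An example sentence to embed.", return_tensors="pt")
outputs = model(**inputs)  # hidden states usable as features/embeddings
```

Because `trust_remote_code=True` runs code from the repository, it should only be set after inspecting the repo's modeling files.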
ninja/Sentiment_Analysis
ninja
"2024-07-05T13:21:34Z"
70
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-07-05T08:23:53Z"
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
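This card likewise leaves its usage section as [More Information Needed]. A minimal, hedged sketch based only on the entry's metadata — a BERT text-classification model named `ninja/Sentiment_Analysis`, served through transformers — might be:

```python
from transformers import pipeline

# Editor's sketch, not from the model author. The entry's tags mark this
# as a BERT text-classification model, so the standard pipeline API
# should apply; the model's label set is not documented in the card.
classifier = pipeline("text-classification", model="ninja/Sentiment_Analysis")
print(classifier("I really enjoyed this product!"))
```

Because the card documents neither the training data nor the output labels, the classifier's predictions should be inspected on known examples before any downstream use.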