| Column | Type | Min | Max |
|:-------|:-----|:----|:----|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-02 12:28:20 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (462 classes) | | |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-02 12:26:48 |
| card | string (length) | 11 | 1.01M |
LaghaRoua/lora_model
LaghaRoua
2025-06-01T11:17:56Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-01T11:17:47Z
---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** LaghaRoua
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
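A quick way to try the model, as a minimal sketch not taken from the card: it assumes the repo hosts merged weights loadable with plain `transformers`; if it only contains a LoRA adapter, attach it to the base model with `peft` instead.

```python
# Hypothetical usage sketch: load the uploaded checkpoint and generate.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LaghaRoua/lora_model")
model = AutoModelForCausalLM.from_pretrained("LaghaRoua/lora_model", device_map="auto")

inputs = tokenizer("Tell me about Llama 3.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```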
vmerinoc/openai-whisper-medium-lora-colab
vmerinoc
2025-06-01T11:17:50Z
3
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
automatic-speech-recognition
2025-05-14T13:59:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
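The card's quick-start section is empty. As a heavily hedged sketch only: the repo name suggests a PEFT/LoRA adapter for `openai/whisper-medium`, but the card confirms none of this, so treat every name below as an assumption.

```python
# Assumption-laden sketch: treat the repo as a PEFT adapter on whisper-medium.
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
model = PeftModel.from_pretrained(base, "vmerinoc/openai-whisper-medium-lora-colab")
processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
```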
yobellee/ppo-Huggy
yobellee
2025-06-01T11:16:29Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2025-06-01T11:16:24Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**

This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: yobellee/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
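Beyond the browser flow above, the trained run can also be pulled programmatically. A small sketch, not part of the original card, using `huggingface_hub`:

```python
# Sketch: download the trained Huggy run (policy .onnx plus configs/logs)
# so it can be resumed or inspected locally.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="yobellee/ppo-Huggy")
print(local_dir)
```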
Kanishkaran/Banking
Kanishkaran
2025-06-01T11:14:22Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
2025-06-01T11:11:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
arka08652/orpo_trained_advertise-v0.2
arka08652
2025-06-01T11:13:10Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-01T11:13:01Z
---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** arka08652
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3-bnb-4bit

This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mradermacher/RewardAnything-8B-v1-GGUF
mradermacher
2025-06-01T11:09:20Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:zhuohaoyu/RewardAnything-8B-v1", "base_model:quantized:zhuohaoyu/RewardAnything-8B-v1", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-01T10:27:22Z
---
base_model: zhuohaoyu/RewardAnything-8B-v1
language:
- en
library_name: transformers
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/zhuohaoyu/RewardAnything-8B-v1

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/RewardAnything-8B-v1-GGUF/resolve/main/RewardAnything-8B-v1.Q2_K.gguf) | Q2_K | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/RewardAnything-8B-v1-GGUF/resolve/main/RewardAnything-8B-v1.Q3_K_S.gguf) | Q3_K_S | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/RewardAnything-8B-v1-GGUF/resolve/main/RewardAnything-8B-v1.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/RewardAnything-8B-v1-GGUF/resolve/main/RewardAnything-8B-v1.Q3_K_L.gguf) | Q3_K_L | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/RewardAnything-8B-v1-GGUF/resolve/main/RewardAnything-8B-v1.IQ4_XS.gguf) | IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/RewardAnything-8B-v1-GGUF/resolve/main/RewardAnything-8B-v1.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RewardAnything-8B-v1-GGUF/resolve/main/RewardAnything-8B-v1.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RewardAnything-8B-v1-GGUF/resolve/main/RewardAnything-8B-v1.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/RewardAnything-8B-v1-GGUF/resolve/main/RewardAnything-8B-v1.Q5_K_M.gguf) | Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/RewardAnything-8B-v1-GGUF/resolve/main/RewardAnything-8B-v1.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/RewardAnything-8B-v1-GGUF/resolve/main/RewardAnything-8B-v1.Q8_0.gguf) | Q8_0 | 8.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/RewardAnything-8B-v1-GGUF/resolve/main/RewardAnything-8B-v1.f16.gguf) | f16 | 16.5 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
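As a supplement to the usage section above, a hedged sketch of running one of these quants with `llama-cpp-python`; the filename comes from the table, everything else is an assumption rather than part of the card.

```python
# Sketch: fetch one quant from this repo and run a short completion.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

gguf_path = hf_hub_download(
    repo_id="mradermacher/RewardAnything-8B-v1-GGUF",
    filename="RewardAnything-8B-v1.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Rate this response for helpfulness:", max_tokens=64)
print(out["choices"][0]["text"])
```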
thoddnn/colqwen2.5-v0.2-mlx-8bit-test
thoddnn
2025-06-01T11:08:15Z
0
0
mlx
[ "mlx", "safetensors", "colqwen2_5", "license:apache-2.0", "region:us" ]
null
2025-06-01T11:05:27Z
---
license: apache-2.0
tags:
- mlx
---

# thoddnn/colqwen2.5-v0.2-mlx-8bit-test

The Model [thoddnn/colqwen2.5-v0.2-mlx-8bit-test](https://huggingface.co/thoddnn/colqwen2.5-v0.2-mlx-8bit-test) was converted to MLX format from [thoddnn/colqwen2.5-v0.2-mlx](https://huggingface.co/thoddnn/colqwen2.5-v0.2-mlx) using mlx-lm version **0.0.3**.

## Use with mlx

```bash
pip install mlx-embeddings
```

```python
from mlx_embeddings import load, generate
import mlx.core as mx

model, tokenizer = load("thoddnn/colqwen2.5-v0.2-mlx-8bit-test")

# For text embeddings
output = generate(model, tokenizer, texts=["I like grapes", "I like fruits"])
embeddings = output.text_embeds  # normalized embeddings

# Compute dot product between normalized embeddings
similarity_matrix = mx.matmul(embeddings, embeddings.T)
print("Similarity matrix between texts:")
print(similarity_matrix)
```
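Because the returned `text_embeds` are already L2-normalized, the dot product above is exactly the cosine similarity between the two sentences, so values close to 1 indicate near-identical meaning.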
falcongoldman/nexusai-llm
falcongoldman
2025-06-01T11:00:36Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma3", "trl", "en", "base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-01T11:00:24Z
---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** falcongoldman
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit

This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mradermacher/EraX-VL-7B-V1.5-GGUF
mradermacher
2025-06-01T11:00:19Z
221
1
transformers
[ "transformers", "gguf", "erax", "multimodal", "erax-vl-7B", "insurance", "ocr", "vietnamese", "bcg", "image-to-text", "vi", "en", "zh", "base_model:erax-ai/EraX-VL-7B-V1.5", "base_model:quantized:erax-ai/EraX-VL-7B-V1.5", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
image-to-text
2025-01-01T08:38:24Z
---
base_model: erax-ai/EraX-VL-7B-V1.5
language:
- vi
- en
- zh
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- erax
- multimodal
- erax-vl-7B
- insurance
- ocr
- vietnamese
- bcg
- image-to-text
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/erax-ai/EraX-VL-7B-V1.5

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/EraX-VL-7B-V1.5-GGUF/resolve/main/EraX-VL-7B-V1.5.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/EraX-VL-7B-V1.5-GGUF/resolve/main/EraX-VL-7B-V1.5.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/EraX-VL-7B-V1.5-GGUF/resolve/main/EraX-VL-7B-V1.5.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/EraX-VL-7B-V1.5-GGUF/resolve/main/EraX-VL-7B-V1.5.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/EraX-VL-7B-V1.5-GGUF/resolve/main/EraX-VL-7B-V1.5.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/EraX-VL-7B-V1.5-GGUF/resolve/main/EraX-VL-7B-V1.5.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/EraX-VL-7B-V1.5-GGUF/resolve/main/EraX-VL-7B-V1.5.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/EraX-VL-7B-V1.5-GGUF/resolve/main/EraX-VL-7B-V1.5.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/EraX-VL-7B-V1.5-GGUF/resolve/main/EraX-VL-7B-V1.5.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/EraX-VL-7B-V1.5-GGUF/resolve/main/EraX-VL-7B-V1.5.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/EraX-VL-7B-V1.5-GGUF/resolve/main/EraX-VL-7B-V1.5.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/EraX-VL-7B-V1.5-GGUF/resolve/main/EraX-VL-7B-V1.5.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/EraX-VL-7B-V1.5-GGUF/resolve/main/EraX-VL-7B-V1.5.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
akashnj/pythia-financial-lora
akashnj
2025-06-01T10:59:32Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-01T10:59:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/OS-Genesis-7B-AC-GGUF
mradermacher
2025-06-01T10:56:54Z
28
1
transformers
[ "transformers", "gguf", "en", "base_model:OS-Copilot/OS-Genesis-7B-AC", "base_model:quantized:OS-Copilot/OS-Genesis-7B-AC", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-01-02T10:16:15Z
---
base_model: OS-Copilot/OS-Genesis-7B-AC
language:
- en
library_name: transformers
license: apache-2.0
no_imatrix: nan detected in blk.27.attn_q.weight
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/OS-Copilot/OS-Genesis-7B-AC

<!-- provided-files -->

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OS-Genesis-7B-AC-GGUF/resolve/main/OS-Genesis-7B-AC.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/OS-Genesis-7B-AC-GGUF/resolve/main/OS-Genesis-7B-AC.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/OS-Genesis-7B-AC-GGUF/resolve/main/OS-Genesis-7B-AC.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/OS-Genesis-7B-AC-GGUF/resolve/main/OS-Genesis-7B-AC.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OS-Genesis-7B-AC-GGUF/resolve/main/OS-Genesis-7B-AC.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/OS-Genesis-7B-AC-GGUF/resolve/main/OS-Genesis-7B-AC.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/OS-Genesis-7B-AC-GGUF/resolve/main/OS-Genesis-7B-AC.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OS-Genesis-7B-AC-GGUF/resolve/main/OS-Genesis-7B-AC.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OS-Genesis-7B-AC-GGUF/resolve/main/OS-Genesis-7B-AC.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/OS-Genesis-7B-AC-GGUF/resolve/main/OS-Genesis-7B-AC.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/OS-Genesis-7B-AC-GGUF/resolve/main/OS-Genesis-7B-AC.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/OS-Genesis-7B-AC-GGUF/resolve/main/OS-Genesis-7B-AC.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/OS-Genesis-7B-AC-GGUF/resolve/main/OS-Genesis-7B-AC.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
spawnhr-lab/tom
spawnhr-lab
2025-06-01T10:54:05Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-01T10:18:20Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
#   - text: >-
#       prompt
#     output:
#       url: https://...
instance_prompt: tom
---

# Tom

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `tom` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "tom",
    "lora_weights": "https://huggingface.co/spawnhr-lab/tom/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('spawnhr-lab/tom', weight_name='lora.safetensors')
image = pipeline('tom').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)

## Training details

- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/spawnhr-lab/tom/discussions) to add images that show off what you’ve made with this LoRA.
mradermacher/visionparser-vl-expert-GGUF
mradermacher
2025-06-01T10:53:06Z
135
2
transformers
[ "transformers", "gguf", "document-parsing", "information-extraction", "unsloth", "qwen2_vl", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-01-03T03:05:27Z
---
base_model: critical-hf/visionparser-vl-expert
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- document-parsing
- information-extraction
- transformers
- unsloth
- qwen2_vl
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/critical-hf/visionparser-vl-expert

<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/visionparser-vl-expert-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/visionparser-vl-expert-GGUF/resolve/main/visionparser-vl-expert.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/visionparser-vl-expert-GGUF/resolve/main/visionparser-vl-expert.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/visionparser-vl-expert-GGUF/resolve/main/visionparser-vl-expert.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/visionparser-vl-expert-GGUF/resolve/main/visionparser-vl-expert.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/visionparser-vl-expert-GGUF/resolve/main/visionparser-vl-expert.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/visionparser-vl-expert-GGUF/resolve/main/visionparser-vl-expert.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/visionparser-vl-expert-GGUF/resolve/main/visionparser-vl-expert.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/visionparser-vl-expert-GGUF/resolve/main/visionparser-vl-expert.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/visionparser-vl-expert-GGUF/resolve/main/visionparser-vl-expert.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/visionparser-vl-expert-GGUF/resolve/main/visionparser-vl-expert.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/visionparser-vl-expert-GGUF/resolve/main/visionparser-vl-expert.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/visionparser-vl-expert-GGUF/resolve/main/visionparser-vl-expert.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/visionparser-vl-expert-GGUF/resolve/main/visionparser-vl-expert.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
Tina94/alik
Tina94
2025-06-01T10:51:30Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-01T10:37:40Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
#   - text: >-
#       prompt
#     output:
#       url: https://...
instance_prompt: alik
---

# Alik

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `alik` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "alik",
    "lora_weights": "https://huggingface.co/Tina94/alik/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Tina94/alik', weight_name='lora.safetensors')
image = pipeline('alik').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)

## Training details

- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/Tina94/alik/discussions) to add images that show off what you’ve made with this LoRA.
kdzd/DeepSeek-R1-Distill-Qwen-7B-FinQA-SFT
kdzd
2025-06-01T10:50:25Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-01T10:50:06Z
---
base_model: unsloth/deepseek-r1-distill-qwen-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** kdzd
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-qwen-7b-unsloth-bnb-4bit

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
RafatK/Whisper_Largev2-Swahili-Decodis_Comb_FT
RafatK
2025-06-01T10:42:31Z
59
1
null
[ "safetensors", "whisper", "automatic-speech-recognition", "sw", "dataset:google/fleurs", "dataset:mozilla-foundation/common_voice_11_0", "dataset:openslr/openslr", "base_model:openai/whisper-large-v2", "base_model:finetune:openai/whisper-large-v2", "license:cc-by-nc-4.0", "region:us" ]
automatic-speech-recognition
2025-05-21T19:52:10Z
---
license: cc-by-nc-4.0
datasets:
- google/fleurs
- mozilla-foundation/common_voice_11_0
- openslr/openslr
language:
- sw
metrics:
- wer
base_model:
- openai/whisper-large-v2
pipeline_tag: automatic-speech-recognition
---

<p align="left">
<a href="https://decodis.com/">
<img src="https://static.wixstatic.com/media/41bde8_fdfad2782d8641edb098e72f1ea10d65~mv2.png/v1/fill/w_185,h_50,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/41bde8_fdfad2782d8641edb098e72f1ea10d65~mv2.png" style="display: inline-block; vertical-align: middle;" alt="DECODIS_Website" />
</a>
</p>

# 🎤 General-Purpose Swahili ASR Model (Open Datasets + Domain Data)

<p>
<a href="https://github.com/Rafat-decodis/Robust-ASR-for-Low-Resource-Languages/tree/main" target="_blank" style="margin: 2px;">
<img src="https://img.shields.io/badge/Decodis-Indepth Analysis-536af5?color=536af5&logo=github" style="display: inline-block; vertical-align: middle;" alt="Main code" />
</a>
</p>

This automatic speech recognition (ASR) model is trained on three open multilingual datasets (Mozilla Common Voice, Google's FLEURS, and OpenSLR) plus a multi-domain in-house DPP/DP dataset, to provide **high-accuracy transcription** of read-aloud **Swahili** speech. It achieves **strong generalization**, maintaining benchmark accuracy while improving performance on real-world test data, and performs well on both clean and noisy audio.

<!-- **Model is Finetuned by** [DECODIS](https://www.decodis.com/) -->

This model is part of a full ASR ablation study that analyzes how robust the training data is across different modes and variations of data collection.

👉 View all models on [GitHub](https://github.com/Rafat-decodis/Robust_Swahili_ASR)

**We are particularly interested in validating the conclusions we’ve observed through our ablation studies**: while benchmark datasets like FLEURS are useful for comparison, they do not fully capture the variability and challenges of real-world speech, especially for underrepresented languages like Swahili. We are inviting the community to try out these models and help assess:

1. How well the models perform on natural, conversational, or noisy audio
2. Whether open-source datasets (like Common Voice & FLEURS) mainly excel on clean, benchmark-style speech
3. Whether the improvements we've seen from combining diverse datasets generalize to your use case
4. Gaps between benchmark results and real-world usability
5. Whether combining both kinds of data yields balanced results, and how much this depends on data quality and label accuracy

## Model

[Whisper](https://github.com/openai/whisper) is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multitasking model that can perform multilingual speech recognition, speech translation, and language identification.
---

## 🚀 How to Use

```python
import torch
from transformers import WhisperForConditionalGeneration, pipeline
from transformers.utils import is_flash_attn_2_available

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model = WhisperForConditionalGeneration.from_pretrained(
    "RafatK/Whisper_Largev2-Swahili-Decodis_Comb_FT", torch_dtype=torch.float16
).to(device)

# Clear the static forced decoder ids so language/task can be passed at
# generation time instead.
model.generation_config.input_ids = model.generation_config.forced_decoder_ids
model.generation_config.forced_decoder_ids = None

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer="openai/whisper-large-v2",
    feature_extractor="openai/whisper-large-v2",
    chunk_length_s=15,
    device=device,
    model_kwargs=(
        {"attn_implementation": "flash_attention_2"}
        if is_flash_attn_2_available()
        else {"attn_implementation": "sdpa"}
    ),
    generate_kwargs={
        "num_beams": 5,
        "max_new_tokens": 440,
        "early_stopping": True,
        "language": "swahili",
        "task": "transcribe",
    },
)

text_output = pipe("audio.wav")["text"]
```

---

## 📦 Training Data

- **Common Voice 11.0**: crowdsourced dataset with validated Swahili recordings (~350 hours)
- **FLEURS**: Google's multilingual dataset covering 102 languages, including Swahili (~50 hours)
- **OpenSLR**: African Languages in the Field: Speech Fundamentals and Automation
- **DPP and DP data** (Description)

📊 **Total Duration**: ~400 hours

📁 **Languages**: Swahili (`sw`)

---

## 🏋️‍♂️ Training Strategy

- Architecture: `whisper-large-v2`
- Framework: Whisper and Hugging Face Transformers
- Sampling rate: 16 kHz
- Preprocessing: volume normalization, high-grade noise addition, prosodic augmentation, silence trimming
- Learning rate: 1e-5
- Optimizer: AdamW (PyTorch)
- Steps: 3000
- Pretrained on open data
- Fine-tuned on domain data

---

## 📈 Evaluation Metric (WER)

| Dataset | This Model | Whisper Large V2 |
|:--------|-----------:|-----------------:|
| **FLEURS (benchmark)** | **12.41** | **39.40** |
| **[Decodis Test Set](https://huggingface.co/datasets/RafatK/Decodis_Test_Set) (collected by DECODIS)** | **39.42** | **99.98** |

---

## 🎯 Intended Use

- General-purpose transcription systems
- Balanced performance on clean and noisy data
- Speech interfaces in multilingual and informal settings

---

## ⚠️ Limitations

- Slight trade-off in benchmark precision
- May need more domain data for extreme acoustic variation

---

📝 Please try the models and share your feedback, issues, or results via:

- GitHub Issues: submit an issue
- Hugging Face Discussions: join the conversation

Your feedback will help us refine our dataset and improve ASR for underrepresented languages like Swahili and Yoruba.

---
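The WER figures above can be reproduced for your own recordings; a minimal sketch, assuming the `evaluate` package (not something the card itself ships):

```python
# Sketch: word error rate between reference and hypothesis transcripts.
import evaluate

wer = evaluate.load("wer")
score = wer.compute(
    predictions=["habari za asubuhi"],      # model output
    references=["habari za asubuhi sana"],  # ground truth
)
print(f"WER: {score:.2%}")
```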
RafatK/Whisper_Largev2-Swahili-Decodis_Base
RafatK
2025-06-01T10:40:39Z
85
0
null
[ "safetensors", "whisper", "automatic-speech-recognition", "sw", "dataset:google/fleurs", "dataset:mozilla-foundation/common_voice_11_0", "dataset:openslr/openslr", "base_model:openai/whisper-large-v2", "base_model:finetune:openai/whisper-large-v2", "license:cc-by-nc-4.0", "region:us" ]
automatic-speech-recognition
2025-05-20T22:04:23Z
---
license: cc-by-nc-4.0
datasets:
- google/fleurs
- mozilla-foundation/common_voice_11_0
- openslr/openslr
language:
- sw
metrics:
- wer
base_model:
- openai/whisper-large-v2
pipeline_tag: automatic-speech-recognition
---

<p align="left">
<a href="https://decodis.com/">
<img src="https://static.wixstatic.com/media/41bde8_fdfad2782d8641edb098e72f1ea10d65~mv2.png/v1/fill/w_185,h_50,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/41bde8_fdfad2782d8641edb098e72f1ea10d65~mv2.png" style="display: inline-block; vertical-align: middle;" alt="DECODIS_Website" />
</a>
</p>

# 🧠 High-Accuracy Swahili ASR Model for Clean Speech (Common Voice + FLEURS + OpenSLR)

<p>
<a href="https://github.com/Rafat-decodis/Robust-ASR-for-Low-Resource-Languages/tree/main" target="_blank" style="margin: 2px;">
<img src="https://img.shields.io/badge/Decodis-Indepth Analysis-536af5?color=536af5&logo=github" style="display: inline-block; vertical-align: middle;" alt="Main code" />
</a>
</p>

This automatic speech recognition (ASR) model is trained on three open multilingual datasets, Mozilla Common Voice, Google's FLEURS, and OpenSLR, to provide **high-accuracy transcription** of clean, read-aloud **Swahili** speech. It is ideal for tasks involving clean, well-structured speech input, such as reading assistants or general-purpose multilingual transcription.

This model is part of a full ASR ablation study that analyzes how robust the training data is across different modes and variations of data collection.

👉 View all models on [GitHub](https://github.com/Rafat-decodis/Robust-ASR-for-Low-Resource-Languages)

**We are particularly interested in validating the conclusions we’ve observed through our ablation studies**: while benchmark datasets like FLEURS are useful for comparison, they do not fully capture the variability and challenges of real-world speech, especially for underrepresented languages like Swahili and Yoruba. We are inviting the community to try out these models and help assess:

1. How well the models perform on natural, conversational, or noisy audio
2. Whether open-source datasets (like Common Voice & FLEURS) mainly excel on clean, benchmark-style speech
3. Whether the improvements we've seen from combining diverse datasets generalize to your use case
4. Gaps between benchmark results and real-world usability
5. Whether combining both kinds of data yields balanced results, and how much this depends on data quality and label accuracy

## Model

[Whisper](https://github.com/openai/whisper) is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multitasking model that can perform multilingual speech recognition, speech translation, and language identification.
---

## 🚀 How to Use

```python
import torch
from transformers import WhisperForConditionalGeneration, pipeline
from transformers.utils import is_flash_attn_2_available

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model = WhisperForConditionalGeneration.from_pretrained(
    "RafatK/Whisper_Largev2-Swahili-Decodis_Base", torch_dtype=torch.float16
).to(device)

# Clear the static forced decoder ids so language/task can be passed at
# generation time instead.
model.generation_config.input_ids = model.generation_config.forced_decoder_ids
model.generation_config.forced_decoder_ids = None

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer="openai/whisper-large-v2",
    feature_extractor="openai/whisper-large-v2",
    chunk_length_s=15,
    device=device,
    model_kwargs=(
        {"attn_implementation": "flash_attention_2"}
        if is_flash_attn_2_available()
        else {"attn_implementation": "sdpa"}
    ),
    generate_kwargs={
        "num_beams": 5,
        "max_new_tokens": 440,
        "early_stopping": True,
        "language": "swahili",
        "task": "transcribe",
    },
)

text_output = pipe("audio.wav")["text"]
```

---

## 📦 Training Data

- **Common Voice 11.0**: crowdsourced dataset with validated Swahili recordings (~350 hours)
- **FLEURS**: Google's multilingual dataset covering 102 languages, including Swahili (~50 hours)
- **OpenSLR**: African Languages in the Field: Speech Fundamentals and Automation

📊 **Total Duration**: ~400 hours

📁 **Languages**: Swahili (`sw`)

---

## 🏋️‍♂️ Training Setup

- Architecture: `whisper-large-v2`
- Framework: Whisper and Hugging Face Transformers
- Sampling rate: 16 kHz
- Preprocessing: volume normalization, high-grade noise addition, prosodic augmentation, silence trimming
- Learning rate: 1e-5
- Optimizer: AdamW (PyTorch)
- Steps: 3000

---

## 📦 Evaluation Data

- **FLEURS**
- **[Decodis Test Set](https://huggingface.co/datasets/RafatK/Decodis_Test_Set) (collected by DECODIS)**

---

## 📈 Evaluation Metric (WER)

| Dataset | This Model | Whisper Large V2 |
|:--------|-----------:|-----------------:|
| **FLEURS (benchmark)** | **13.31** | **39.40** |
| **Our test set** | **69.86** | **99.98** |

## 🎯 Intended Use

This model performs best in:

- Read or dictated speech
- Clean environments with minimal noise
- Evaluation benchmarks like FLEURS

**Not** recommended for real-world noisy conditions without domain adaptation.

---

## ⚠️ Limitations

- Poor generalization to conversational or spontaneous speech
- Sensitive to background noise and overlapping speakers
- Accents outside training data may reduce accuracy

---

📝 Please try the models and share your feedback, issues, or results via:

- GitHub Issues: submit an issue
- Hugging Face Discussions: join the conversation

Your feedback will help us refine our dataset and improve ASR for underrepresented languages like Swahili and Yoruba.

---
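The model operates on 16 kHz audio; the pipeline resamples file inputs for you, but if you feed raw arrays, resample first. A small sketch, assuming `torchaudio` (which the card does not mention):

```python
# Sketch: resample arbitrary audio to the 16 kHz the model was trained on.
import torchaudio

waveform, sample_rate = torchaudio.load("audio.wav")
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)
torchaudio.save("audio_16k.wav", waveform, 16_000)
```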
DOFOFFICIAL/DeepSeek-R1-0528-Qwen3-8B-Correct-GGUF
DOFOFFICIAL
2025-06-01T10:37:02Z
0
1
transformers
[ "transformers", "gguf", "arxiv:2501.12948", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-01T05:33:59Z
---
license: mit
library_name: transformers
---

# --About this Quant Model--

**This quant has been extensively tested by our team and verified to be a correct, high-quality GGUF build.**

**Previously, quants of this model from some well-known repositories were broken and performed very poorly.**

**Remember that some organizations care about quantity, not quality.**

**This repository provides a correct GGUF quant.**

*The following is the original model card.*

----

# DeepSeek-R1-0528
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->

<div align="center">
  <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
  <a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
    <img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
    <img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
    <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>
<div align="center" style="line-height: 1;">
  <a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
    <img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
    <img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
    <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>
<div align="center" style="line-height: 1;">
  <a href="LICENSE" style="margin: 2px;">
    <img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>

<p align="center">
  <a href="https://arxiv.org/pdf/2501.12948"><b>Paper Link</b>👁️</a>
</p>

## 1. Introduction

The DeepSeek R1 model has undergone a minor version upgrade, with the current version being DeepSeek-R1-0528. In the latest update, DeepSeek R1 has significantly improved its depth of reasoning and inference capabilities by leveraging increased computational resources and introducing algorithmic optimization mechanisms during post-training. The model has demonstrated outstanding performance across various benchmark evaluations, including mathematics, programming, and general logic. Its overall performance is now approaching that of leading models, such as O3 and Gemini 2.5 Pro.
<p align="center"> <img width="80%" src="figures/benchmark.png"> </p> Compared to the previous version, the upgraded model shows significant improvements in handling complex reasoning tasks. For instance, in the AIME 2025 test, the model’s accuracy has increased from 70% in the previous version to 87.5% in the current version. This advancement stems from enhanced thinking depth during the reasoning process: in the AIME test set, the previous model used an average of 12K tokens per question, whereas the new version averages 23K tokens per question. Beyond its improved reasoning capabilities, this version also offers a reduced hallucination rate, enhanced support for function calling, and better experience for vibe coding. ## 2. Evaluation Results ### DeepSeek-R1-0528 For all our models, the maximum generation length is set to 64K tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 16 responses per query to estimate pass@1. <div align="center"> | Category | Benchmark (Metric) | DeepSeek R1 | DeepSeek R1 0528 |----------|----------------------------------|-----------------|---| | General | | | MMLU-Redux (EM) | 92.9 | 93.4 | | MMLU-Pro (EM) | 84.0 | 85.0 | | GPQA-Diamond (Pass@1) | 71.5 | 81.0 | | SimpleQA (Correct) | 30.1 | 27.8 | | FRAMES (Acc.) | 82.5 | 83.0 | | Humanity's Last Exam (Pass@1) | 8.5 | 17.7 | Code | | | LiveCodeBench (2408-2505) (Pass@1) | 63.5 | 73.3 | | Codeforces-Div1 (Rating) | 1530 | 1930 | | SWE Verified (Resolved) | 49.2 | 57.6 | | Aider-Polyglot (Acc.) | 53.3 | 71.6 | Math | | | AIME 2024 (Pass@1) | 79.8 | 91.4 | | AIME 2025 (Pass@1) | 70.0 | 87.5 | | HMMT 2025 (Pass@1) | 41.7 | 79.4 | | | CNMO 2024 (Pass@1) | 78.8 | 86.9 | Tools | | | BFCL_v3_MultiTurn (Acc) | - | 37.0 | | | Tau-Bench (Pass@1) | - | 53.5(Airline)/63.9(Retail) </div> Note: We use Agentless framework to evaluate model performance on SWE-Verified. We only evaluate text-only prompts in HLE testsets. GPT-4.1 is employed to act user role in Tau-bench evaluation. ### DeepSeek-R1-0528-Qwen3-8B Meanwhile, we distilled the chain-of-thought from DeepSeek-R1-0528 to post-train Qwen3 8B Base, obtaining DeepSeek-R1-0528-Qwen3-8B. This model achieves state-of-the-art (SOTA) performance among open-source models on the AIME 2024, surpassing Qwen3 8B by +10.0% and matching the performance of Qwen3-235B-thinking. We believe that the chain-of-thought from DeepSeek-R1-0528 will hold significant importance for both academic research on reasoning models and industrial development focused on small-scale models. | | AIME 24 | AIME 25 | HMMT Feb 25 | GPQA Diamond | LiveCodeBench (2408-2505) | |--------------------------------|---------|---------|-------------|--------------|---------------------------| | Qwen3-235B-A22B | 85.7 | 81.5 | 62.5 | 71.1 | 66.5 | | Qwen3-32B | 81.4 | 72.9 | - | 68.4 | - | | Qwen3-8B | 76.0 | 67.3 | - | 62.0 | - | | Phi-4-Reasoning-Plus-14B | 81.3 | 78.0 | 53.6 | 69.3 | - | | Gemini-2.5-Flash-Thinking-0520 | 82.3 | 72.0 | 64.2 | 82.8 | 62.3 | | o3-mini (medium) | 79.6 | 76.7 | 53.3 | 76.8 | 65.9 | | DeepSeek-R1-0528-Qwen3-8B | 86.0 | 76.3 | 61.5 | 61.1 | 60.5 | ## 3. Chat Website & API Platform You can chat with DeepSeek-R1 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com/sign_in), and switch on the button "DeepThink" We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/) ## 4. 
Please visit the [DeepSeek-R1](https://github.com/deepseek-ai/DeepSeek-R1) repository for more information about running DeepSeek-R1-0528 locally.

Compared to previous versions of DeepSeek-R1, the usage recommendations for DeepSeek-R1-0528 have the following changes:

1. System prompt is supported now.
2. It is not required to add "\<think\>\n" at the beginning of the output to force the model into its thinking pattern.

The model architecture of DeepSeek-R1-0528-Qwen3-8B is identical to that of Qwen3-8B, but it shares the same tokenizer configuration as DeepSeek-R1-0528. This model can be run in the same manner as Qwen3-8B, but it is essential to ensure that all configuration files are sourced from our repository rather than the original Qwen3 project.

### System Prompt

In the official DeepSeek web/app, we use the same system prompt with a specific date.

```
该助手为DeepSeek-R1,由深度求索公司创造。
今天是{current date}。
```

For example,

```
该助手为DeepSeek-R1,由深度求索公司创造。
今天是2025年5月28日,星期三。
```

### Temperature

In our web and application environments, the temperature parameter $T_{model}$ is set to 0.6.

### Prompts for File Uploading and Web Search

For file uploading, please follow the template to create prompts, where {file_name}, {file_content} and {question} are arguments.

```
file_template = \
"""[file name]: {file_name}
[file content begin]
{file_content}
[file content end]
{question}"""
```

For Web Search, {search_results}, {cur_date}, and {question} are arguments. For Chinese queries, we use the prompt:

```
search_answer_zh_template = \
'''# 以下内容是基于用户发送的消息的搜索结果:
{search_results}
在我给你的搜索结果中,每个结果都是[webpage X begin]...[webpage X end]格式的,X代表每篇文章的数字索引。请在适当的情况下在句子末尾引用上下文。请按照引用编号[citation:X]的格式在答案中对应部分引用上下文。如果一句话源自多个上下文,请列出所有相关的引用编号,例如[citation:3][citation:5],切记不要将引用集中在最后返回引用编号,而是在答案对应部分列出。
在回答时,请注意以下几点:
- 今天是{cur_date}。
- 并非搜索结果的所有内容都与用户的问题密切相关,你需要结合问题,对搜索结果进行甄别、筛选。
- 对于列举类的问题(如列举所有航班信息),尽量将答案控制在10个要点以内,并告诉用户可以查看搜索来源、获得完整信息。优先提供信息完整、最相关的列举项;如非必要,不要主动告诉用户搜索结果未提供的内容。
- 对于创作类的问题(如写论文),请务必在正文的段落中引用对应的参考编号,例如[citation:3][citation:5],不能只在文章末尾引用。你需要解读并概括用户的题目要求,选择合适的格式,充分利用搜索结果并抽取重要信息,生成符合用户要求、极具思想深度、富有创造力与专业性的答案。你的创作篇幅需要尽可能延长,对于每一个要点的论述要推测用户的意图,给出尽可能多角度的回答要点,且务必信息量大、论述详尽。
- 如果回答很长,请尽量结构化、分段落总结。如果需要分点作答,尽量控制在5个点以内,并合并相关的内容。
- 对于客观类的问答,如果问题的答案非常简短,可以适当补充一到两句相关信息,以丰富内容。
- 你需要根据用户要求和回答内容选择合适、美观的回答格式,确保可读性强。
- 你的回答应该综合多个相关网页来回答,不能重复引用一个网页。
- 除非用户要求,否则你回答的语言需要和用户提问的语言保持一致。
# 用户消息为:
{question}'''
```

For English queries, we use the prompt:

```
search_answer_en_template = \
'''# The following contents are the search results related to the user's message:
{search_results}
In the search results I provide to you, each result is formatted as [webpage X begin]...[webpage X end], where X represents the numerical index of each article. Please cite the context at the end of the relevant sentence when appropriate. Use the citation format [citation:X] in the corresponding part of your answer. If a sentence is derived from multiple contexts, list all relevant citation numbers, such as [citation:3][citation:5]. Be sure not to cluster all citations at the end; instead, include them in the corresponding parts of the answer.
When responding, please keep the following points in mind:
- Today is {cur_date}.
- Not all content in the search results is closely related to the user's question. You need to evaluate and filter the search results based on the question.
- For listing-type questions (e.g., listing all flight information), try to limit the answer to 10 key points and inform the user that they can refer to the search sources for complete information. Prioritize providing the most complete and relevant items in the list. Avoid mentioning content not provided in the search results unless necessary.
- For creative tasks (e.g., writing an essay), ensure that references are cited within the body of the text, such as [citation:3][citation:5], rather than only at the end of the text. You need to interpret and summarize the user's requirements, choose an appropriate format, fully utilize the search results, extract key information, and generate an answer that is insightful, creative, and professional. Extend the length of your response as much as possible, addressing each point in detail and from multiple perspectives, ensuring the content is rich and thorough.
- If the response is lengthy, structure it well and summarize it in paragraphs. If a point-by-point format is needed, try to limit it to 5 points and merge related content.
- For objective Q&A, if the answer is very brief, you may add one or two related sentences to enrich the content.
- Choose an appropriate and visually appealing format for your response based on the user's requirements and the content of the answer, ensuring strong readability.
- Your answer should synthesize information from multiple relevant webpages and avoid repeatedly citing the same webpage.
- Unless the user requests otherwise, your response should be in the same language as the user's question.
# The user's message is:
{question}'''
```

## 5. License

This code repository is licensed under the [MIT License](LICENSE). The use of DeepSeek-R1 models is also subject to the [MIT License](LICENSE). The DeepSeek-R1 series (including Base and Chat) supports commercial use and distillation.

## 6. Citation

```
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
      title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
      author={DeepSeek-AI},
      year={2025},
      eprint={2501.12948},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.12948},
}
```

## 7. Contact

If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).
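As a usage illustration for the API platform mentioned in Section 3, here is a minimal sketch of a chat request with the recommended temperature of 0.6 and the system prompt format from above. The base URL and the `deepseek-reasoner` model name follow DeepSeek's public platform documentation; verify both against [platform.deepseek.com](https://platform.deepseek.com/) before relying on them.

```python
from openai import OpenAI

# OpenAI-compatible client pointed at the DeepSeek platform (assumed endpoint).
client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.deepseek.com")

response = client.chat.completions.create(
    model="deepseek-reasoner",  # assumed model name for DeepSeek-R1 on the platform
    messages=[
        # System prompt format from the "System Prompt" section above.
        {"role": "system", "content": "该助手为DeepSeek-R1,由深度求索公司创造。\n今天是2025年5月28日,星期三。"},
        {"role": "user", "content": "Prove that the sum of two even integers is even."},
    ],
    temperature=0.6,  # matches the web/app setting T_model = 0.6
)
print(response.choices[0].message.content)
```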
KBhandari11/llama_block_3_mathematical_induction_All
KBhandari11
2025-06-01T10:36:15Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "model: llama", "repo_name: llama_block_3_mathematical_induction_All", "file_name: llama_block_3_mathematical_induction_All_5000_5.pt", "base_model: meta-llama/Llama-2-7b-hf", "pruning_style: block", "community: 3", "pruning_ratio: 20", "dataset_label: mathematical_induction", "sparsity_ratio: 20", "dataset: ['tasksource/bigbench', 'mathematical_induction']", "finetune: All", "modules_size: 18", "modules: ['10_attn.k', '10_attn.o', '11_attn.k', '14_attn.o', '15_attn.o', '15_attn.v', '16_attn.o', '17_attn.k', '19_attn.v', '20_attn.q', '22_attn.v', '24_attn.k', '25_attn.k', '27_attn.k', '30_attn.q', '4_attn.k', '7_attn.k', '8_attn.v']", "rank: 1", "tags: ['model: llama', 'repo_name: llama_block_3_mathematical_induction_All', 'file_name: llama_block_3_mathematical_induction_All_5000_5.pt', 'base_model: meta-llama/Llama-2-7b-hf', 'pruning_style: block', 'community: 3', 'pruning_ratio: 20', 'dataset_label: mathematical_induction', 'sparsity_ratio: 20', \"dataset: ['tasksource/bigbench', 'mathematical_induction']\", 'finetune: All', 'modules_size: 18', \"modules: ['10_attn.k', '10_attn.o', '11_attn.k', '14_attn.o', '15_attn.o', '15_attn.v', '16_attn.o', '17_attn.k', '19_attn.v', '20_attn.q', '22_attn.v', '24_attn.k', '25_attn.k', '27_attn.k', '30_attn.q', '4_attn.k', '7_attn.k', '8_attn.v']\", 'rank: 1']", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-01T10:31:05Z
--- library_name: transformers tags: - 'model: llama' - 'repo_name: llama_block_3_mathematical_induction_All' - 'file_name: llama_block_3_mathematical_induction_All_5000_5.pt' - 'base_model: meta-llama/Llama-2-7b-hf' - 'pruning_style: block' - 'community: 3' - 'pruning_ratio: 20' - 'dataset_label: mathematical_induction' - 'sparsity_ratio: 20' - 'dataset: [''tasksource/bigbench'', ''mathematical_induction'']' - 'finetune: All' - 'modules_size: 18' - 'modules: [''10_attn.k'', ''10_attn.o'', ''11_attn.k'', ''14_attn.o'', ''15_attn.o'', ''15_attn.v'', ''16_attn.o'', ''17_attn.k'', ''19_attn.v'', ''20_attn.q'', ''22_attn.v'', ''24_attn.k'', ''25_attn.k'', ''27_attn.k'', ''30_attn.q'', ''4_attn.k'', ''7_attn.k'', ''8_attn.v'']' - 'rank: 1' - 'tags: [''model: llama'', ''repo_name: llama_block_3_mathematical_induction_All'', ''file_name: llama_block_3_mathematical_induction_All_5000_5.pt'', ''base_model: meta-llama/Llama-2-7b-hf'', ''pruning_style: block'', ''community: 3'', ''pruning_ratio: 20'', ''dataset_label: mathematical_induction'', ''sparsity_ratio: 20'', "dataset: [''tasksource/bigbench'', ''mathematical_induction'']", ''finetune: All'', ''modules_size: 18'', "modules: [''10_attn.k'', ''10_attn.o'', ''11_attn.k'', ''14_attn.o'', ''15_attn.o'', ''15_attn.v'', ''16_attn.o'', ''17_attn.k'', ''19_attn.v'', ''20_attn.q'', ''22_attn.v'', ''24_attn.k'', ''25_attn.k'', ''27_attn.k'', ''30_attn.q'', ''4_attn.k'', ''7_attn.k'', ''8_attn.v'']", ''rank: 1'']' --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MeiKing111/v1land_18
MeiKing111
2025-06-01T10:33:23Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-06-01T10:12:39Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
veddhanth/lora-trained-xl-2
veddhanth
2025-06-01T10:33:06Z
0
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2025-06-01T10:20:54Z
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a realistic portrait of sks face
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---

<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# SDXL LoRA DreamBooth - veddhanth/lora-trained-xl-2

<Gallery />

## Model description

These are veddhanth/lora-trained-xl-2 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained using [DreamBooth](https://dreambooth.github.io/).

LoRA for the text encoder was enabled: True.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

## Trigger words

You should use "a realistic portrait of sks face" to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](veddhanth/lora-trained-xl-2/tree/main) them in the Files & versions tab.

## Intended uses & limitations

#### How to use

```python
# A minimal sketch (not an official recipe): load the base SDXL pipeline with the
# fp16-fix VAE used in training, then apply these LoRA weights. Assumes a CUDA device.
import torch
from diffusers import AutoencoderKL, AutoPipelineForText2Image

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("veddhanth/lora-trained-xl-2")  # pass weight_name=... if needed
image = pipeline("a realistic portrait of sks face").images[0]
image.save("portrait.png")
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
FormlessAI/86a1796a-6cea-4a9a-9ca1-8ad777d2110b
FormlessAI
2025-06-01T10:32:36Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:unsloth/SmolLM2-360M-Instruct", "base_model:finetune:unsloth/SmolLM2-360M-Instruct", "endpoints_compatible", "region:us" ]
null
2025-06-01T07:21:03Z
--- base_model: unsloth/SmolLM2-360M-Instruct library_name: transformers model_name: 86a1796a-6cea-4a9a-9ca1-8ad777d2110b tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for 86a1796a-6cea-4a9a-9ca1-8ad777d2110b This model is a fine-tuned version of [unsloth/SmolLM2-360M-Instruct](https://huggingface.co/unsloth/SmolLM2-360M-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="FormlessAI/86a1796a-6cea-4a9a-9ca1-8ad777d2110b", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/fna6o59j) This model was trained with SFT. ### Framework versions - TRL: 0.18.0 - Transformers: 4.52.3 - Pytorch: 2.7.0+cu128 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
thoeppner/emotion_model
thoeppner
2025-06-01T10:32:00Z
0
0
null
[ "emotion-detection", "image-classification", "computer-vision", "resnet", "dataset:custom", "license:mit", "region:us" ]
image-classification
2025-05-31T20:45:44Z
---
license: mit
tags:
- emotion-detection
- image-classification
- computer-vision
- resnet
datasets:
- custom
---

# emotion_model-thoeppner

This model is a fine-tuned version of `resnet-18` on a custom emotion dataset. It achieves the following results on the validation set:

- **Loss**: 0.8515
- **Accuracy**: 71.60 %

## Model description

The model is based on the ResNet-18 architecture and was trained with PyTorch on an emotion dataset. The goal was to build a robust classification model that recognizes 9 emotions in faces.

## Intended uses and limitations

**Intended use:**

- Classifying emotions for educational, analytical, or demonstration purposes
- Can be used in Gradio applications for emotion recognition on uploaded faces

**Limitations:**

- Not suitable for emotions outside the 9 trained classes
- Recognition can be inaccurate under poor lighting, with occluded faces, or with atypical facial expressions
- The model requires clear, frontal face images for optimal results

## Training and evaluation data

The model was trained on a private emotion dataset with **64,864** images in **9 classes**.

**Classes:**

- happy
- sad
- angry
- surprised
- fear
- disgust
- neutral
- contempt
- unknown

**Dataset split:**

- 80 % training
- 20 % validation (internal split)

## Training hyperparameters

The following hyperparameters were used during training:

| Hyperparameter    | Value                                      |
|-------------------|--------------------------------------------|
| Optimizer         | Adam                                       |
| Learning rate     | 0.0005 (with ReduceLROnPlateau scheduler)  |
| Batch size        | 64                                         |
| Loss function     | CrossEntropyLoss                           |
| Early stopping    | after 5 epochs without improvement         |
| Maximum epochs    | 6                                          |
| Input image size  | 224 × 224 pixels                           |

## Training results

| Epoch | Train Loss | Train Accuracy | Val Loss | Val Accuracy |
|-------|------------|----------------|----------|--------------|
| 1     | 0.9685     | 65.72 %        | 0.9370   | 66.70 %      |
| 2     | 0.7705     | 72.42 %        | 1.0514   | 62.30 %      |
| 3     | 0.7409     | 73.08 %        | 1.0166   | 64.90 %      |
| 4     | 0.6329     | 77.15 %        | 0.9834   | 65.70 %      |
| 5     | 0.4704     | 83.40 %        | 0.8515   | 71.60 %      |
| 6     | 0.3543     | 87.83 %        | 0.9220   | 70.80 %      |

- **Best validation loss**: **0.8515**
- **Best validation accuracy**: **71.60 %**

---

## Comparison with a zero-shot model (CLIP)

To put the performance into context, the model was compared with the zero-shot classification model `openai/clip-vit-base-patch32`. Both models were evaluated on the identical test set (5,000 images).

### Zero-shot model:

- **Model**: CLIP (ViT-Base, Patch-32)
- **Task**: zero-shot image classification
- **No fine-tuning**: relies only on text-image understanding

### Observations:

- The ResNet-18 model achieves a clearly higher accuracy on the known emotions and is better at distinguishing subtle differences between emotions.
- The CLIP model generalized well to previously unseen emotions but had lower accuracy overall.

### Zero-shot results:

- **Accuracy**: 10.0 %
- **Macro avg F1-score**: 7 %
- **Weighted avg F1-score**: 5 %

---

## Framework versions

| Library        | Version       |
|----------------|---------------|
| Transformers   | 4.52.2        |
| PyTorch        | 2.6.0+cu124   |
| Datasets       | 3.4.1         |
| Tokenizers     | 0.21.1        |
| scikit-learn   | 1.4.2         |
| Matplotlib     | 3.8.4         |
| Seaborn        | 0.13.2        |
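For reference, here is a minimal sketch of the CLIP zero-shot baseline described above, using the 9 class names as candidate labels. The image path is a placeholder, and the exact label prompts used in the original comparison are not specified in the card.

```python
from PIL import Image
from transformers import pipeline

# The 9 emotion classes listed above, used as candidate labels for CLIP.
LABELS = ["happy", "sad", "angry", "surprised", "fear",
          "disgust", "neutral", "contempt", "unknown"]

clf = pipeline("zero-shot-image-classification", model="openai/clip-vit-base-patch32")
scores = clf(Image.open("face.jpg"), candidate_labels=LABELS)  # "face.jpg" is a placeholder
print(scores[0])  # highest-scoring emotion with its probability
```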
mradermacher/OS-Genesis-7B-AW-GGUF
mradermacher
2025-06-01T10:26:55Z
41
0
transformers
[ "transformers", "gguf", "en", "base_model:OS-Copilot/OS-Genesis-7B-AW", "base_model:quantized:OS-Copilot/OS-Genesis-7B-AW", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-01-08T08:03:09Z
--- base_model: OS-Copilot/OS-Genesis-7B-AW language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> static quants of https://huggingface.co/OS-Copilot/OS-Genesis-7B-AW <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/OS-Genesis-7B-AW-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/OS-Genesis-7B-AW-GGUF/resolve/main/OS-Genesis-7B-AW.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | multi-modal supplement | | [GGUF](https://huggingface.co/mradermacher/OS-Genesis-7B-AW-GGUF/resolve/main/OS-Genesis-7B-AW.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/OS-Genesis-7B-AW-GGUF/resolve/main/OS-Genesis-7B-AW.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/OS-Genesis-7B-AW-GGUF/resolve/main/OS-Genesis-7B-AW.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/OS-Genesis-7B-AW-GGUF/resolve/main/OS-Genesis-7B-AW.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/OS-Genesis-7B-AW-GGUF/resolve/main/OS-Genesis-7B-AW.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/OS-Genesis-7B-AW-GGUF/resolve/main/OS-Genesis-7B-AW.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/OS-Genesis-7B-AW-GGUF/resolve/main/OS-Genesis-7B-AW.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/OS-Genesis-7B-AW-GGUF/resolve/main/OS-Genesis-7B-AW.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/OS-Genesis-7B-AW-GGUF/resolve/main/OS-Genesis-7B-AW.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/OS-Genesis-7B-AW-GGUF/resolve/main/OS-Genesis-7B-AW.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/OS-Genesis-7B-AW-GGUF/resolve/main/OS-Genesis-7B-AW.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/OS-Genesis-7B-AW-GGUF/resolve/main/OS-Genesis-7B-AW.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
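As a concrete complement to the Usage section above, here is a minimal sketch of running one of these quants with llama-cpp-python (one possible GGUF runtime among several). The filename matches the Q4_K_M row in the table; the context size is an arbitrary choice.

```python
from llama_cpp import Llama

# Downloads the Q4_K_M quant from this repo via huggingface_hub and loads it.
llm = Llama.from_pretrained(
    repo_id="mradermacher/OS-Genesis-7B-AW-GGUF",
    filename="OS-Genesis-7B-AW.Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm("Hello, who are you?", max_tokens=64)
print(out["choices"][0]["text"])
```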
ZERONE182/ppo-LunarLander-v2
ZERONE182
2025-06-01T10:25:27Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-06-01T10:25:07Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 216.04 +/- 67.77
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch; the checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` convention, so check the repo's Files tab:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; verify it against the files in this repository.
checkpoint = load_from_hub("ZERONE182/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
MehdiAslam/speecht5_Mehdi_Final_Model
MehdiAslam
2025-06-01T10:21:12Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2025-06-01T08:28:46Z
--- library_name: transformers license: mit base_model: microsoft/speecht5_tts tags: - generated_from_trainer model-index: - name: speecht5_Mehdi_Final_Model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speecht5_Mehdi_Final_Model This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.4070 - eval_runtime: 11.4273 - eval_samples_per_second: 23.453 - eval_steps_per_second: 11.726 - epoch: 36.1860 - step: 2750 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - training_steps: 3000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.52.2 - Pytorch 2.6.0+cu124 - Datasets 2.14.4 - Tokenizers 0.21.1
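Since the card gives no usage snippet, here is a minimal sketch following the standard SpeechT5 text-to-speech recipe. It assumes this checkpoint keeps the base model's processor files, and the speaker x-vector below is a placeholder taken from the CMU ARCTIC set.

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("MehdiAslam/speecht5_Mehdi_Final_Model")
model = SpeechT5ForTextToSpeech.from_pretrained("MehdiAslam/speecht5_Mehdi_Final_Model")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Placeholder speaker embedding; any 512-dimensional x-vector works here.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hello, this is a test.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```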
zhoushch/unit3-dqn-SpaceInvadersNoFrameskip-v4
zhoushch
2025-06-01T10:20:54Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-06-01T10:20:15Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 597.00 +/- 187.83 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib SBX (SB3 + Jax): https://github.com/araffin/sbx Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga zhoushch -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga zhoushch -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga zhoushch ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
MeiKing111/v1land_20
MeiKing111
2025-06-01T10:19:34Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-06-01T10:12:56Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
tscstudios/yeg0vybqtlwqjx57q9zteooqgwo1_1ad28e9e-b2e5-4869-a261-bc3f168d0f3c
tscstudios
2025-06-01T10:19:25Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-01T10:19:24Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: TOK --- # Yeg0Vybqtlwqjx57Q9Zteooqgwo1_1Ad28E9E B2E5 4869 A261 Bc3F168D0F3C <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `TOK` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "TOK", "lora_weights": "https://huggingface.co/tscstudios/yeg0vybqtlwqjx57q9zteooqgwo1_1ad28e9e-b2e5-4869-a261-bc3f168d0f3c/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('tscstudios/yeg0vybqtlwqjx57q9zteooqgwo1_1ad28e9e-b2e5-4869-a261-bc3f168d0f3c', weight_name='lora.safetensors') image = pipeline('TOK').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 1000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/tscstudios/yeg0vybqtlwqjx57q9zteooqgwo1_1ad28e9e-b2e5-4869-a261-bc3f168d0f3c/discussions) to add images that show off what you’ve made with this LoRA.
KhalidKhader/GPU-Project-qwen-1.5b-BZU-optimized
KhalidKhader
2025-06-01T10:18:05Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-01T09:14:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
VIDEOS-18-Anjali-Arora-Videos/FULL.VIDEO.Anjali.Arora.Viral.Video.Tutorial.Official
VIDEOS-18-Anjali-Arora-Videos
2025-06-01T10:17:34Z
0
0
null
[ "region:us" ]
null
2025-06-01T10:17:17Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
KBhandari11/llama_block_2_formal_fallacies_syllogisms_negation_Community
KBhandari11
2025-06-01T10:12:24Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "model: llama", "repo_name: llama_block_2_formal_fallacies_syllogisms_negation_Community", "file_name: llama_block_2_formal_fallacies_syllogisms_negation_Community_5000_5.pt", "base_model: meta-llama/Llama-2-7b-hf", "pruning_style: block", "community: 2", "pruning_ratio: 20", "dataset_label: formal_fallacies_syllogisms_negation", "sparsity_ratio: 20", "dataset: ['tasksource/bigbench', 'formal_fallacies_syllogisms_negation']", "finetune: Community", "modules_size: 28", "modules: ['11_attn.o', '11_attn.v', '12_attn.k', '12_attn.o', '13_attn.v', '15_attn.q', '17_attn.o', '18_attn.v', '19_attn.k', '19_attn.o', '19_attn.q', '21_attn.q', '23_attn.q', '24_attn.v', '25_attn.q', '26_attn.o', '26_attn.v', '27_attn.q', '27_attn.v', '29_attn.v', '4_attn.q', '4_attn.v', '5_attn.k', '5_attn.o', '5_attn.v', '8_attn.k', '9_attn.o', '9_attn.q']", "rank: 2", "tags: ['model: llama', 'repo_name: llama_block_2_formal_fallacies_syllogisms_negation_Community', 'file_name: llama_block_2_formal_fallacies_syllogisms_negation_Community_5000_5.pt', 'base_model: meta-llama/Llama-2-7b-hf', 'pruning_style: block', 'community: 2', 'pruning_ratio: 20', 'dataset_label: formal_fallacies_syllogisms_negation', 'sparsity_ratio: 20', \"dataset: ['tasksource/bigbench', 'formal_fallacies_syllogisms_negation']\", 'finetune: Community', 'modules_size: 28', \"modules: ['11_attn.o', '11_attn.v', '12_attn.k', '12_attn.o', '13_attn.v', '15_attn.q', '17_attn.o', '18_attn.v', '19_attn.k', '19_attn.o', '19_attn.q', '21_attn.q', '23_attn.q', '24_attn.v', '25_attn.q', '26_attn.o', '26_attn.v', '27_attn.q', '27_attn.v', '29_attn.v', '4_attn.q', '4_attn.v', '5_attn.k', '5_attn.o', '5_attn.v', '8_attn.k', '9_attn.o', '9_attn.q']\", 'rank: 2']", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-01T10:07:15Z
--- library_name: transformers tags: - 'model: llama' - 'repo_name: llama_block_2_formal_fallacies_syllogisms_negation_Community' - 'file_name: llama_block_2_formal_fallacies_syllogisms_negation_Community_5000_5.pt' - 'base_model: meta-llama/Llama-2-7b-hf' - 'pruning_style: block' - 'community: 2' - 'pruning_ratio: 20' - 'dataset_label: formal_fallacies_syllogisms_negation' - 'sparsity_ratio: 20' - 'dataset: [''tasksource/bigbench'', ''formal_fallacies_syllogisms_negation'']' - 'finetune: Community' - 'modules_size: 28' - 'modules: [''11_attn.o'', ''11_attn.v'', ''12_attn.k'', ''12_attn.o'', ''13_attn.v'', ''15_attn.q'', ''17_attn.o'', ''18_attn.v'', ''19_attn.k'', ''19_attn.o'', ''19_attn.q'', ''21_attn.q'', ''23_attn.q'', ''24_attn.v'', ''25_attn.q'', ''26_attn.o'', ''26_attn.v'', ''27_attn.q'', ''27_attn.v'', ''29_attn.v'', ''4_attn.q'', ''4_attn.v'', ''5_attn.k'', ''5_attn.o'', ''5_attn.v'', ''8_attn.k'', ''9_attn.o'', ''9_attn.q'']' - 'rank: 2' - 'tags: [''model: llama'', ''repo_name: llama_block_2_formal_fallacies_syllogisms_negation_Community'', ''file_name: llama_block_2_formal_fallacies_syllogisms_negation_Community_5000_5.pt'', ''base_model: meta-llama/Llama-2-7b-hf'', ''pruning_style: block'', ''community: 2'', ''pruning_ratio: 20'', ''dataset_label: formal_fallacies_syllogisms_negation'', ''sparsity_ratio: 20'', "dataset: [''tasksource/bigbench'', ''formal_fallacies_syllogisms_negation'']", ''finetune: Community'', ''modules_size: 28'', "modules: [''11_attn.o'', ''11_attn.v'', ''12_attn.k'', ''12_attn.o'', ''13_attn.v'', ''15_attn.q'', ''17_attn.o'', ''18_attn.v'', ''19_attn.k'', ''19_attn.o'', ''19_attn.q'', ''21_attn.q'', ''23_attn.q'', ''24_attn.v'', ''25_attn.q'', ''26_attn.o'', ''26_attn.v'', ''27_attn.q'', ''27_attn.v'', ''29_attn.v'', ''4_attn.q'', ''4_attn.v'', ''5_attn.k'', ''5_attn.o'', ''5_attn.v'', ''8_attn.k'', ''9_attn.o'', ''9_attn.q'']", ''rank: 2'']' --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. 
--> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hexuan21/dpsk-r1-dis-llama-8b_eqa_lora_sft
hexuan21
2025-06-01T10:12:23Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "lora", "generated_from_trainer", "base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B", "base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Llama-8B", "license:other", "region:us" ]
null
2025-06-01T10:12:19Z
--- library_name: peft license: other base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B tags: - llama-factory - lora - generated_from_trainer model-index: - name: dpsk-r1-dis-llama-8b_eqa_lora_sft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dpsk-r1-dis-llama-8b_eqa_lora_sft This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) on the energy_qa_alpaca dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1.0 ### Training results ### Framework versions - PEFT 0.15.2 - Transformers 4.52.4 - Pytorch 2.7.0+cu126 - Datasets 3.6.0 - Tokenizers 0.21.1
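Since the card omits a usage snippet, here is a minimal sketch of loading the adapter with PEFT on top of the stated base model. The sample prompt is a hypothetical energy-domain question, chosen only because the card names an energy QA dataset.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "deepseek-ai/DeepSeek-R1-Distill-Llama-8B"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "hexuan21/dpsk-r1-dis-llama-8b_eqa_lora_sft")

prompt = "Explain the difference between AC and DC power."  # hypothetical sample question
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```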
New-Viral-Anjali-Arora-Viral-Video/FULL.VIDEO.LINK.Anjali.Arora.Viral.Video.Leaks.Official
New-Viral-Anjali-Arora-Viral-Video
2025-06-01T10:09:12Z
0
0
null
[ "region:us" ]
null
2025-06-01T10:08:58Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
jonlecumberri/MNLP_M3_mcqa_model_v2
jonlecumberri
2025-06-01T10:08:19Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-01T10:06:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
werent4/gliclass-audio-no-audioset
werent4
2025-06-01T10:08:07Z
0
0
transformers
[ "transformers", "safetensors", "GLiClass", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-01T10:05:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
suku9/pretrained_model
suku9
2025-06-01T10:04:34Z
5
0
transformers
[ "transformers", "safetensors", "gpt2", "generated_from_trainer", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2025-05-25T06:56:57Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: pretrained_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pretrained_model This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5383 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0006 - train_batch_size: 1024 - eval_batch_size: 1024 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.95) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100000 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:------:|:---------------:| | 0.5377 | 0.4546 | 100000 | 0.5496 | | 0.5313 | 0.9092 | 200000 | 0.5442 | | 0.5307 | 1.3639 | 300000 | 0.5422 | | 0.5282 | 1.8185 | 400000 | 0.5406 | | 0.5266 | 2.2731 | 500000 | 0.5393 | | 0.5248 | 2.7277 | 600000 | 0.5383 | ### Framework versions - Transformers 4.51.1 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
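The card gives hyperparameters but no usage snippet; a minimal generation sketch, assuming the checkpoint loads through the standard causal-LM auto classes and the repo ships a tokenizer (both assumptions; the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes the repo contains both model weights and a tokenizer
tokenizer = AutoTokenizer.from_pretrained("suku9/pretrained_model")
model = AutoModelForCausalLM.from_pretrained("suku9/pretrained_model")

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```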
xTimeCrystal/RWKV-7-25M-Base
xTimeCrystal
2025-06-01T10:03:36Z
0
0
null
[ "text-generation", "en", "dataset:HuggingFaceFW/fineweb-edu", "dataset:open-web-math/open-web-math", "arxiv:1910.09700", "license:mit", "region:us" ]
text-generation
2025-06-01T03:47:00Z
--- license: mit datasets: - HuggingFaceFW/fineweb-edu - open-web-math/open-web-math language: - en pipeline_tag: text-generation --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** xTimeCrystal - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** RWKV 7 - **Language(s) (NLP):** English - **License:** MIT - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** bf16 non-mixed precision, used own version of Muon with lr from 5e-3 to 1e-3. <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] Throughput = infinite [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results Bits-per-byte: ~1 HellaSwag Accuracy: 33.4% (removed Wikihow entries) #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hexuan21/qwen3-1-7b_eqa_lora_sft
hexuan21
2025-06-01T10:03:24Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "lora", "generated_from_trainer", "base_model:Qwen/Qwen3-1.7B", "base_model:adapter:Qwen/Qwen3-1.7B", "license:other", "region:us" ]
null
2025-06-01T10:03:16Z
--- library_name: peft license: other base_model: Qwen/Qwen3-1.7B tags: - llama-factory - lora - generated_from_trainer model-index: - name: qwen3-1-7b_eqa_lora_sft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # qwen3-1-7b_eqa_lora_sft This model is a fine-tuned version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) on the energy_qa_alpaca dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1.0 ### Training results ### Framework versions - PEFT 0.15.2 - Transformers 4.52.4 - Pytorch 2.7.0+cu126 - Datasets 3.6.0 - Tokenizers 0.21.1
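The card lists training hyperparameters but no loading snippet; a minimal PEFT sketch, assuming the adapter attaches cleanly to the stock base model (the prompt is illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter from this repo
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-1.7B")
model = PeftModel.from_pretrained(base, "hexuan21/qwen3-1-7b_eqa_lora_sft")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-1.7B")

inputs = tokenizer("What factors drive residential electricity prices?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```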
shadowlilac/gemma3-4b-262144-emb-Q4_0-GGUF
shadowlilac
2025-06-01T10:03:01Z
0
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:shadowlilac/gemma3-4b-262144-emb", "base_model:quantized:shadowlilac/gemma3-4b-262144-emb", "endpoints_compatible", "region:us" ]
null
2025-06-01T10:02:44Z
--- base_model: shadowlilac/gemma3-4b-262144-emb tags: - llama-cpp - gguf-my-repo --- # shadowlilac/gemma3-4b-262144-emb-Q4_0-GGUF This model was converted to GGUF format from [`shadowlilac/gemma3-4b-262144-emb`](https://huggingface.co/shadowlilac/gemma3-4b-262144-emb) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/shadowlilac/gemma3-4b-262144-emb) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo shadowlilac/gemma3-4b-262144-emb-Q4_0-GGUF --hf-file gemma3-4b-262144-emb-q4_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo shadowlilac/gemma3-4b-262144-emb-Q4_0-GGUF --hf-file gemma3-4b-262144-emb-q4_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo shadowlilac/gemma3-4b-262144-emb-Q4_0-GGUF --hf-file gemma3-4b-262144-emb-q4_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo shadowlilac/gemma3-4b-262144-emb-Q4_0-GGUF --hf-file gemma3-4b-262144-emb-q4_0.gguf -c 2048 ```
veddhanth/lora-trained-xl
veddhanth
2025-06-01T09:58:22Z
0
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2025-06-01T09:33:45Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: openrail++ instance_prompt: a realistic portrait of sks face widget: [] tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - veddhanth/lora-trained-xl <Gallery /> ## Model description These are veddhanth/lora-trained-xl LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a realistic portrait of sks face to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/veddhanth/lora-trained-xl/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
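A minimal sketch that could fill the TODO above, assuming a CUDA GPU; the fp16 VAE mirrors the one named in the card, and the prompt reuses the documented trigger phrase:

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# The card names madebyollin/sdxl-vae-fp16-fix as the training VAE
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
)
pipe.load_lora_weights("veddhanth/lora-trained-xl")
pipe.to("cuda")

image = pipe("a realistic portrait of sks face").images[0]
image.save("portrait.png")
```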
CapyKing10/JARVIS
CapyKing10
2025-06-01T09:57:58Z
0
0
null
[ "en", "dataset:openbmb/Ultra-FineWeb", "base_model:deepseek-ai/DeepSeek-R1-0528", "base_model:finetune:deepseek-ai/DeepSeek-R1-0528", "region:us" ]
null
2025-06-01T09:55:52Z
--- datasets: - openbmb/Ultra-FineWeb language: - en metrics: - character base_model: - deepseek-ai/DeepSeek-R1-0528 ---
hexuan21/gemma3-1b-it_eqa_lora_sft
hexuan21
2025-06-01T09:57:45Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "lora", "generated_from_trainer", "base_model:google/gemma-3-1b-it", "base_model:adapter:google/gemma-3-1b-it", "license:other", "region:us" ]
null
2025-06-01T09:57:34Z
--- library_name: peft license: other base_model: google/gemma-3-1b-it tags: - llama-factory - lora - generated_from_trainer model-index: - name: gemma3-1b-it_eqa_lora_sft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gemma3-1b-it_eqa_lora_sft This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it) on the energy_qa_alpaca dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1.0 ### Training results ### Framework versions - PEFT 0.15.2 - Transformers 4.52.4 - Pytorch 2.7.0+cu126 - Datasets 3.6.0 - Tokenizers 0.21.1
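The same adapter-loading pattern as the Qwen3 repo above applies; a minimal sketch, assuming the adapter attaches cleanly to the stock instruction-tuned base (merging shown as an optional deployment step):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("google/gemma-3-1b-it")
model = PeftModel.from_pretrained(base, "hexuan21/gemma3-1b-it_eqa_lora_sft")

# Optionally fold the LoRA deltas into the base weights for standalone inference
merged = model.merge_and_unload()
merged.save_pretrained("gemma3-1b-it-eqa-merged")
```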
arialiefreviews/Arialief
arialiefreviews
2025-06-01T09:54:11Z
0
0
null
[ "region:us" ]
null
2025-06-01T09:33:09Z
# Arialief Nerve Function Support - Arialief US Review Order Now **[Arialief](https://www.diginear.com/2PGQH1JJ/ZFSHTZX/)** In today’s fast-paced world, many individuals face daily challenges due to discomfort and aches, particularly those associated with sciatica. Sciatic nerve pain is not just a physical discomfort; it can significantly affect one’s quality of life, limiting mobility and overall well-being. Many products flood the market promising relief, but few stand out as prominently as Arialief. This supplement addresses sciatic nerve discomfort effectively, allowing individuals to reclaim their lives and enjoy everyday activities without pain. With a unique blend of natural ingredients, Arialief aims to provide both rapid relief and long-term support for nerve health, making it a compelling choice for those seeking help. In this review, we will explore what Arialief is, how it works, its key ingredients, benefits, pricing, potential side effects, and where to buy it. By the end of this review, you should have a clear understanding of whether Arialief is the right fit for your needs. ## **[➦➦Bumper Offers Sale has Started, Hurry Before It's Over!](https://www.diginear.com/2PGQH1JJ/ZFSHTZX/)** ## What is Arialief? Arialief is a specially formulated dietary supplement designed to support nerve health and provide relief from discomfort associated with various conditions. This product leverages the power of nature, utilizing a unique blend of plant-based ingredients and essential nutrients to promote overall comfort and enhance mobility. What sets this supplement apart is its commitment to using non-GMO, gluten-free, and natural ingredients, making it suitable for a wide range of individuals looking for a gentle yet effective solution to their discomfort. Manufactured in a state-of-the-art FDA-registered facility, Arialief adheres to strict quality control standards, ensuring that every bottle contains the highest quality ingredients. Beyond pain relief, this supplement also aims to enhance energy levels and sharpen mental clarity, allowing users to feel their best throughout the day. By addressing the root causes of discomfort, Arialief offers a holistic approach to health that resonates with those seeking long-term solutions rather than temporary fixes. ## Arialief Ingredients The ingredients in **[Arialief](https://www.diginear.com/2PGQH1JJ/ZFSHTZX/)** work in different ways to promote long-lasting relief from nerve discomfort. Many of those ingredients go to work quickly while also supporting long-term nerve health. Here are all six active ingredients in Arialief and how they work, according to the manufacturer: Palmitoylethanolamide (PEA): Primepalm is a proprietary formula designed specifically for sciatic nerve pain. Arialief appears to be the first major supplement made using the formula. Oddly, the makers of Arialief provide limited information about what’s inside Primepalm, and the ingredient doesn’t actually appear to be listed on the label. However, it appears to be a proprietary form of palmitoylethanolamide (PEA). ALA (Alpha Lipoic Acid): Arialief contains alpha lipoic acid, an antioxidant linked to nerve cell health, inflammation relief, and overall nerve discomfort relief. Alpha lipoic acid is known as a master antioxidant because it targets inflammation throughout the body. ALA is a powerful antioxidant known for its ability to combat oxidative stress, a major contributor to nerve damage and pain. 
Studies have shown that ALA can help improve symptoms related to nerve damage, particularly in diabetic patients. Turmeric: Turmeric is one of the best-known antioxidant plant extracts available today. It works because it contains an antioxidant molecule called curcumin. Curcumin can promote the healing of nerve tissue while also helping with overall inflammation, according to the manufacturer. The turmeric in Arialief is standardized to contain 95% curcuminoids (like curcumin) by weight, making it an ultra-concentrated form of turmeric. Magnesium Glycinate: Magnesium is vital for muscle relaxation and nerve function. This specific form of magnesium is highly absorbable, helping to relieve muscle tension that can exacerbate nerve pain. CoQ10: Coenzyme Q10 is an antioxidant molecule naturally found in certain plants and vegetables. Some take CoQ10 supplements daily for anti-aging. Others take CoQ10 for joint pain relief or sciatic nerve pain. CoQ10 is the last listed ingredient in the Arialief proprietary formula, which means it’s the smallest ingredient by weight. However, you don’t need a lot of CoQ10 to exhibit active effects. Arialief delivers CoQ10 in the form of ubiquinone, a version that’s easy for your body to absorb. L-Carnitine: This amino acid is essential for energy production within nerve cells. L-carnitine helps transport fatty acids into the mitochondria, where they are converted into energy. Adequate energy levels are crucial for nerve function, and many **[Arialief](https://www.diginear.com/2PGQH1JJ/ZFSHTZX/)** joint pain supplement users note an increase in their overall energy and vitality. ## How Does Arialief Work? Arialief works by delivering a blend of natural ingredients linked to nerve health, nerve pain relief, and overall health. One of the most unique ingredients in Arialief is Primepalm, a proprietary formula designed specifically for sciatic nerve discomfort. The formula also contains antioxidant molecules like alpha lipoic acid, plant extracts like turmeric and butcher’s broom, and nutrients like L-carnitine and magnesium. ## **[➦➦Bumper Offers Sale has Started, Hurry Before It's Over!](https://www.diginear.com/2PGQH1JJ/ZFSHTZX/)**
mradermacher/Qwen2.5-7B-sft-dpo-10k-KTO-GGUF
mradermacher
2025-06-01T09:54:09Z
0
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "trl", "kto", "en", "base_model:AmberYifan/Qwen2.5-7B-sft-dpo-10k-KTO", "base_model:quantized:AmberYifan/Qwen2.5-7B-sft-dpo-10k-KTO", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-01T09:18:12Z
--- base_model: AmberYifan/Qwen2.5-7B-sft-dpo-10k-KTO language: - en library_name: transformers model_name: Qwen2.5-7B-sft-dpo-10k-KTO quantized_by: mradermacher tags: - generated_from_trainer - trl - kto --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/AmberYifan/Qwen2.5-7B-sft-dpo-10k-KTO <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-sft-dpo-10k-KTO-GGUF/resolve/main/Qwen2.5-7B-sft-dpo-10k-KTO.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-sft-dpo-10k-KTO-GGUF/resolve/main/Qwen2.5-7B-sft-dpo-10k-KTO.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-sft-dpo-10k-KTO-GGUF/resolve/main/Qwen2.5-7B-sft-dpo-10k-KTO.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-sft-dpo-10k-KTO-GGUF/resolve/main/Qwen2.5-7B-sft-dpo-10k-KTO.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-sft-dpo-10k-KTO-GGUF/resolve/main/Qwen2.5-7B-sft-dpo-10k-KTO.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-sft-dpo-10k-KTO-GGUF/resolve/main/Qwen2.5-7B-sft-dpo-10k-KTO.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-sft-dpo-10k-KTO-GGUF/resolve/main/Qwen2.5-7B-sft-dpo-10k-KTO.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-sft-dpo-10k-KTO-GGUF/resolve/main/Qwen2.5-7B-sft-dpo-10k-KTO.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-sft-dpo-10k-KTO-GGUF/resolve/main/Qwen2.5-7B-sft-dpo-10k-KTO.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-sft-dpo-10k-KTO-GGUF/resolve/main/Qwen2.5-7B-sft-dpo-10k-KTO.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-sft-dpo-10k-KTO-GGUF/resolve/main/Qwen2.5-7B-sft-dpo-10k-KTO.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-sft-dpo-10k-KTO-GGUF/resolve/main/Qwen2.5-7B-sft-dpo-10k-KTO.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
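If you just need a single file, a minimal download sketch with the huggingface_hub client (the Q4_K_M filename is taken from the quant table above):

```python
from huggingface_hub import hf_hub_download

# Downloads one quant from the table above and returns its local path
path = hf_hub_download(
    repo_id="mradermacher/Qwen2.5-7B-sft-dpo-10k-KTO-GGUF",
    filename="Qwen2.5-7B-sft-dpo-10k-KTO.Q4_K_M.gguf",
)
print(path)
```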
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
MaxPowerUnlimited/vit-superhero-villain
MaxPowerUnlimited
2025-06-01T09:52:56Z
38
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-05-31T07:07:59Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-base-patch16-224 tags: - image-classification - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vit-superhero-villain results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.736318407960199 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-superhero-villain This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.2902 - Accuracy: 0.7363 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 26 | 1.4140 | 0.735 | | 1.2713 | 2.0 | 52 | 1.3908 | 0.735 | | 1.2713 | 3.0 | 78 | 1.3709 | 0.735 | | 1.2028 | 4.0 | 104 | 1.3544 | 0.74 | | 1.2028 | 5.0 | 130 | 1.3359 | 0.74 | | 1.1776 | 6.0 | 156 | 1.3219 | 0.74 | | 1.1776 | 7.0 | 182 | 1.3078 | 0.74 | | 1.1515 | 8.0 | 208 | 1.2952 | 0.74 | | 1.1515 | 9.0 | 234 | 1.2841 | 0.74 | | 1.1519 | 10.0 | 260 | 1.2733 | 0.745 | | 1.1519 | 11.0 | 286 | 1.2637 | 0.745 | | 1.107 | 12.0 | 312 | 1.2557 | 0.745 | | 1.107 | 13.0 | 338 | 1.2495 | 0.745 | | 1.0611 | 14.0 | 364 | 1.2441 | 0.745 | | 1.0611 | 15.0 | 390 | 1.2388 | 0.745 | | 1.0748 | 16.0 | 416 | 1.2347 | 0.745 | | 1.0748 | 17.0 | 442 | 1.2317 | 0.745 | | 1.0563 | 18.0 | 468 | 1.2294 | 0.745 | | 1.0563 | 19.0 | 494 | 1.2280 | 0.745 | | 1.062 | 20.0 | 520 | 1.2277 | 0.745 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.5.1+cu121 - Datasets 3.5.0 - Tokenizers 0.21.1
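The card reports evaluation accuracy but no inference example; a minimal classification sketch, assuming the standard image-classification pipeline (the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="MaxPowerUnlimited/vit-superhero-villain")

# Replace with a real local path or image URL
predictions = classifier("character.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```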
Dingyun-Huang/oe-roberta-base-squad1
Dingyun-Huang
2025-06-01T09:51:40Z
27
0
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "question-answering", "optoelectronics", "science", "data-mining", "en", "dataset:rajpurkar/squad", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2024-03-20T21:13:47Z
--- license: mit datasets: - rajpurkar/squad language: - en metrics: - exact_match: 0.837 - f1: 0.911 - squad base_model: - FacebookAI/roberta-base pipeline_tag: question-answering library_name: transformers tags: - optoelectronics - science - data-mining --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> The OE-RoBERTa model is domain-adapted from RoBERTa-base over research literature in optoelectronics. The adapted model is then fine-tuned on SQuAD v1.1 for question answering. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Language(s) (NLP):** English - **Adapted from model:** FacebookAI/roberta-base ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** [OptoelectronicsLM-codebase (GitHub)](https://github.com/Dingyun-Huang/OptoelectronicsLM-codebase) - **Paper:** [Cost-Efficient Domain-Adaptive Pretraining of Language Models for Optoelectronics Applications](https://pubs.acs.org/doi/10.1021/acs.jcim.4c02029) ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("question-answering", model="Dingyun-Huang/oe-roberta-base-squad1")
```
## Citation <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:**
```bibtex
@article{doi:10.1021/acs.jcim.4c02029,
  author = {Huang, Dingyun and Cole, Jacqueline M.},
  title = {Cost-Efficient Domain-Adaptive Pretraining of Language Models for Optoelectronics Applications},
  journal = {Journal of Chemical Information and Modeling},
  volume = {65},
  number = {5},
  pages = {2476-2486},
  year = {2025},
  doi = {10.1021/acs.jcim.4c02029},
  note = {PMID: 39933074},
  URL = {https://doi.org/10.1021/acs.jcim.4c02029},
  eprint = {https://doi.org/10.1021/acs.jcim.4c02029}
}
```
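As a follow-up to the pipeline snippet above, a usage sketch with an illustrative question and context (not drawn from the paper):

```python
result = pipe(
    question="Which material property is reported?",
    context=(
        "The synthesized polymer films exhibited a photoluminescence "
        "quantum yield of 62% under ambient conditions."
    ),
)
print(result["answer"], result["score"])
```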
zjrwtxtechstudio/PLMAS_R1
zjrwtxtechstudio
2025-06-01T09:47:35Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-01T09:47:04Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
New-Viral-BJP-Neta-Viral-Video/FULL.VIDEO.BJP.Neta.Viral.Video.Tutorial.Official
New-Viral-BJP-Neta-Viral-Video
2025-06-01T09:46:41Z
0
0
null
[ "region:us" ]
null
2025-06-01T09:46:25Z
IntMeGroup/DFBench_qwen25_38787
IntMeGroup
2025-06-01T09:46:00Z
0
0
null
[ "safetensors", "qwen2_5_vl", "license:apache-2.0", "region:us" ]
null
2025-06-01T08:02:11Z
--- license: apache-2.0 ---
karunmatin/gemma-product-description
karunmatin
2025-06-01T09:44:52Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-3-4b-pt", "base_model:finetune:google/gemma-3-4b-pt", "endpoints_compatible", "region:us" ]
null
2025-04-06T08:36:37Z
--- base_model: google/gemma-3-4b-pt library_name: transformers model_name: gemma-product-description tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for gemma-product-description This model is a fine-tuned version of [google/gemma-3-4b-pt](https://huggingface.co/google/gemma-3-4b-pt). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="karunmatin/gemma-product-description", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.15.2 - Transformers: 4.50.0.dev0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
sid22669/ppo-LunarLander-v2
sid22669
2025-06-01T09:41:37Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-06-01T09:41:19Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 242.84 +/- 18.40 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the checkpoint filename is assumed from the usual huggingface_sb3 export convention, so check the repo's Files tab if it differs:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (the filename is an assumption)
checkpoint = load_from_hub(
    repo_id="sid22669/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
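To sanity-check the reported mean reward, an evaluation sketch, assuming a Gymnasium build that still registers LunarLander-v2 and has the Box2D extra installed:

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

# Requires: pip install "gymnasium[box2d]"
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```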
jessO3/ppo-Huggy
jessO3
2025-06-01T09:38:32Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2025-06-01T09:38:25Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: jessO3/ppo-Huggy 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
dimasik2987/a59a8d5c-79db-4348-a4e8-6db2c6300717
dimasik2987
2025-06-01T09:37:57Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:lmsys/vicuna-7b-v1.3", "base_model:adapter:lmsys/vicuna-7b-v1.3", "4-bit", "bitsandbytes", "region:us" ]
null
2025-06-01T08:36:17Z
--- library_name: peft base_model: lmsys/vicuna-7b-v1.3 tags: - axolotl - generated_from_trainer model-index: - name: a59a8d5c-79db-4348-a4e8-6db2c6300717 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: lmsys/vicuna-7b-v1.3 bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 3343fb7e5c035ce1_train_data.json ds_type: json format: custom path: /workspace/input_data/ type: field_instruction: instruct field_output: output format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null dpo: beta: 0.1 enabled: true group_by_length: false rank_loss: true reference_model: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true gradient_clipping: 0.85 group_by_length: false hub_model_id: dimasik2987/a59a8d5c-79db-4348-a4e8-6db2c6300717 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.1 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_steps: 500 micro_batch_size: 12 mixed_precision: bf16 mlflow_experiment_name: /tmp/3343fb7e5c035ce1_train_data.json model_type: AutoModelForCausalLM num_epochs: 2 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: be68d36e-ec6d-457b-ad70-a4e6e098c24d wandb_project: s56-7 wandb_run: your_name wandb_runid: be68d36e-ec6d-457b-ad70-a4e6e098c24d warmup_steps: 50 weight_decay: 0.02 xformers_attention: true ``` </details><br> # a59a8d5c-79db-4348-a4e8-6db2c6300717 This model is a fine-tuned version of [lmsys/vicuna-7b-v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.1658 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 24 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.4918 | 0.0002 | 1 | 1.4376 | | 1.1683 | 0.0437 | 250 | 1.1855 | | 0.8027 | 0.0874 | 500 | 1.1658 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Seanwang1221/LiuShishi_FLUX
Seanwang1221
2025-06-01T09:37:15Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
2025-06-01T09:37:00Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: >- LSS,Half body portrait of Hepburn a woman wearing a white strapless dress against a blue and white background with a tiara on her head, long white glove, with her eyes looking directly at the viewer and her red lips slightly parted. Her hair is styled in tightly slicked-back bun, and her makeup is subtle and natural. output: url: images/Flux_image_00026_.png - text: >- LSS,The image shows a young woman wearing a traditional Chinese Hanfu dress. She is holding a large, ornate parasol in her right hand and is looking directly at the camera with a serious expression on her face. The parasol is made of bamboo and has intricate designs and patterns on it. The woman is also wearing a black headpiece with pink and white flowers on top. The background appears to be a traditional building with a red roof. output: url: images/Flux_image_00029_.png - text: >- LSS,In a dimly lit, intimate studio setting, a captivating woman with cascading brown hair, her eyes sparkling with an enigmatic allure, is posed seductively with parted lips and a hint of a smile. She wears a one-of-a-kind, red silk shirt adorned with intricate gold embroidery, the fabric clinging to her curves in a way that exudes confidence and sensuality. The background is stark and simple, allowing her vibrant outfit and expressive features to take center stage. The camera captures her from a slightly low angle, emphasizing the dramatic effect of her gaze locked onto the viewer's soul as she leans ever so slightly forward, creating an intimate connection that transcends the boundaries of the photograph. output: url: images/Flux_image_00014_.png base_model: black-forest-labs/FLUX.1-dev instance_prompt: LSS --- # Liu Shishi 刘诗诗 CN Actress FLUX <Gallery /> ## Trigger words You should use `LSS` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Seanwang1221/LiuShishi_FLUX/tree/main) them in the Files & versions tab.
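A minimal text-to-image sketch, assuming access to the gated FLUX.1-dev base and a large-memory GPU (the prompt is shortened from the widget examples; steps and guidance are typical FLUX.1-dev settings, not values from this repo):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("Seanwang1221/LiuShishi_FLUX")
pipe.to("cuda")

# "LSS" is the documented trigger word
image = pipe(
    "LSS, half body portrait of a woman in a white strapless dress",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("lss_portrait.png")
```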
FormlessAI/bc0e4886-0591-4caa-9658-c656512aec0d
FormlessAI
2025-06-01T09:35:56Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:unsloth/Llama-3.2-1B-Instruct", "base_model:finetune:unsloth/Llama-3.2-1B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-01T03:16:46Z
--- base_model: unsloth/Llama-3.2-1B-Instruct library_name: transformers model_name: bc0e4886-0591-4caa-9658-c656512aec0d tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for bc0e4886-0591-4caa-9658-c656512aec0d This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="FormlessAI/bc0e4886-0591-4caa-9658-c656512aec0d", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/ghg9u2ua) This model was trained with SFT. ### Framework versions - TRL: 0.18.0 - Transformers: 4.52.3 - Pytorch: 2.7.0+cu128 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
KBhandari11/llama_block_1_english_russian_proverbs_Community
KBhandari11
2025-06-01T09:35:31Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "model: llama", "repo_name: llama_block_1_english_russian_proverbs_Community", "file_name: llama_block_1_english_russian_proverbs_Community_5000_5.pt", "base_model: meta-llama/Llama-2-7b-hf", "pruning_style: block", "community: 1", "pruning_ratio: 20", "dataset_label: english_russian_proverbs", "sparsity_ratio: 20", "dataset: ['tasksource/bigbench', 'english_russian_proverbs']", "finetune: Community", "modules_size: 27", "modules: ['10_attn.v', '12_attn.q', '12_attn.v', '14_attn.q', '16_attn.k', '17_attn.q', '17_attn.v', '18_attn.o', '20_attn.k', '21_attn.o', '22_attn.k', '22_attn.o', '24_attn.q', '26_attn.q', '28_attn.k', '28_attn.q', '28_attn.v', '29_attn.q', '30_attn.k', '3_attn.k', '3_attn.v', '4_attn.o', '6_attn.k', '6_attn.v', '7_attn.o', '9_attn.k', '9_attn.v']", "rank: 2", "tags: ['model: llama', 'repo_name: llama_block_1_english_russian_proverbs_Community', 'file_name: llama_block_1_english_russian_proverbs_Community_5000_5.pt', 'base_model: meta-llama/Llama-2-7b-hf', 'pruning_style: block', 'community: 1', 'pruning_ratio: 20', 'dataset_label: english_russian_proverbs', 'sparsity_ratio: 20', \"dataset: ['tasksource/bigbench', 'english_russian_proverbs']\", 'finetune: Community', 'modules_size: 27', \"modules: ['10_attn.v', '12_attn.q', '12_attn.v', '14_attn.q', '16_attn.k', '17_attn.q', '17_attn.v', '18_attn.o', '20_attn.k', '21_attn.o', '22_attn.k', '22_attn.o', '24_attn.q', '26_attn.q', '28_attn.k', '28_attn.q', '28_attn.v', '29_attn.q', '30_attn.k', '3_attn.k', '3_attn.v', '4_attn.o', '6_attn.k', '6_attn.v', '7_attn.o', '9_attn.k', '9_attn.v']\", 'rank: 2']", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-01T09:30:30Z
--- library_name: transformers tags: - 'model: llama' - 'repo_name: llama_block_1_english_russian_proverbs_Community' - 'file_name: llama_block_1_english_russian_proverbs_Community_5000_5.pt' - 'base_model: meta-llama/Llama-2-7b-hf' - 'pruning_style: block' - 'community: 1' - 'pruning_ratio: 20' - 'dataset_label: english_russian_proverbs' - 'sparsity_ratio: 20' - 'dataset: [''tasksource/bigbench'', ''english_russian_proverbs'']' - 'finetune: Community' - 'modules_size: 27' - 'modules: [''10_attn.v'', ''12_attn.q'', ''12_attn.v'', ''14_attn.q'', ''16_attn.k'', ''17_attn.q'', ''17_attn.v'', ''18_attn.o'', ''20_attn.k'', ''21_attn.o'', ''22_attn.k'', ''22_attn.o'', ''24_attn.q'', ''26_attn.q'', ''28_attn.k'', ''28_attn.q'', ''28_attn.v'', ''29_attn.q'', ''30_attn.k'', ''3_attn.k'', ''3_attn.v'', ''4_attn.o'', ''6_attn.k'', ''6_attn.v'', ''7_attn.o'', ''9_attn.k'', ''9_attn.v'']' - 'rank: 2' - 'tags: [''model: llama'', ''repo_name: llama_block_1_english_russian_proverbs_Community'', ''file_name: llama_block_1_english_russian_proverbs_Community_5000_5.pt'', ''base_model: meta-llama/Llama-2-7b-hf'', ''pruning_style: block'', ''community: 1'', ''pruning_ratio: 20'', ''dataset_label: english_russian_proverbs'', ''sparsity_ratio: 20'', "dataset: [''tasksource/bigbench'', ''english_russian_proverbs'']", ''finetune: Community'', ''modules_size: 27'', "modules: [''10_attn.v'', ''12_attn.q'', ''12_attn.v'', ''14_attn.q'', ''16_attn.k'', ''17_attn.q'', ''17_attn.v'', ''18_attn.o'', ''20_attn.k'', ''21_attn.o'', ''22_attn.k'', ''22_attn.o'', ''24_attn.q'', ''26_attn.q'', ''28_attn.k'', ''28_attn.q'', ''28_attn.v'', ''29_attn.q'', ''30_attn.k'', ''3_attn.k'', ''3_attn.v'', ''4_attn.o'', ''6_attn.k'', ''6_attn.v'', ''7_attn.o'', ''9_attn.k'', ''9_attn.v'']", ''rank: 2'']' --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. 
--> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Seanwang1221/SongJia_FLUX
Seanwang1221
2025-06-01T09:32:15Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
2025-06-01T09:31:36Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: >- SJ, white dress,looking at viewer, floral print, depth of field, night cityscape, (1girl:1), long hair, ulzzang-6500v1.1, (original: 1.2), (realistic: 1.3) , beautiful girl with beautiful details, extremely detailed eyes and face, eyes with beautiful details, absurd, incredibly absurd, huge file size, ultra detail, high resolution, ultra detailed, best quality, masterpiece, illustration, ultra detailed and beautiful, ultra detailed, CG, unity, 8k wallpaper, amazing, fine Detail, masterpiece, top quality, official art, extremely detailed CG unity 8k wallpaper, cinematic lighting, (perfect shiny skin:0.6), slim and smooth lines, (floating), (small breasts:1), earrings , output: url: images/Liblib_01400_.png - text: >- SJ, In a dimly lit, vintage-inspired boudoir, the captivating SJ is poised against a velvet-draped chaise lounge, her cascading raven tresses framing a radiant smile that lights up the room. Her eyes twinkle with an enchanting allure as they gaze into the distance, a pair of exquisite emerald earrings adorning her lobes. A smoky-eye makeup look and bold red lipstick accentuate her stunning features. Her fingers playfully trace the edge of a worn, feather-trimmed pillow, her delicate hand adorned with intricate gold bracelets. The camera captures this intimate moment from a low angle, focusing on her expressive eyes and the subtle glow emanating from within, creating an ethereal and dreamy atmosphere that speaks volumes about her innate grace and charisma. output: url: images/Liblib_01386_.png - text: >- SJ, In a surrealist noir landscape, under the soft glow of a neon-lit cityscape, SJ stands in a dramatic pose, her long, cascading brown hair obscuring one piercing blue eye as she turns her head slightly to the side. Her lips are parted in a mysterious smile, revealing perfectly white teeth adorned with a glittering silver tooth gem. A single, intricate necklace made of obsidian and gold hangs around her neck, catching the dim light and casting eerie shadows on her porcelain skin. The camera is positioned at eye level, focusing closely on her face, capturing every detail of her unique features, while the rest of the scene remains blurred and indistinct in the background, adding to the dreamlike quality of the image. A halo of light illuminates her from behind, casting an ethereal glow around her figure, enhancing the enigmatic aura that surrounds her. output: url: images/Liblib_01399_.png base_model: black-forest-labs/FLUX.1-dev instance_prompt: SJ --- # Song Jia 宋佳 CN Actress FLUX <Gallery /> ## Trigger words You should use `SJ` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Seanwang1221/SongJia_FLUX/tree/main) them in the Files & versions tab.
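Not part of the original card: a hedged usage sketch for loading this LoRA on top of FLUX.1-dev with diffusers. The single-file weight layout and the prompt/generation settings are assumptions.

```python
# A minimal sketch, assuming the repo ships one LoRA .safetensors file that
# diffusers can auto-discover; pass weight_name explicitly if it cannot.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Seanwang1221/SongJia_FLUX")
image = pipe("SJ, white dress, looking at viewer, night cityscape").images[0]
image.save("songjia.png")
```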
RyyDer/deberta-tweetqa
RyyDer
2025-06-01T09:24:59Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "deberta-v2", "question-answering", "generated_from_trainer", "base_model:deepset/deberta-v3-large-squad2", "base_model:finetune:deepset/deberta-v3-large-squad2", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
question-answering
2025-06-01T09:22:08Z
--- library_name: transformers license: cc-by-4.0 base_model: deepset/deberta-v3-large-squad2 tags: - generated_from_trainer model-index: - name: deberta-tweetqa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-tweetqa This model is a fine-tuned version of [deepset/deberta-v3-large-squad2](https://huggingface.co/deepset/deberta-v3-large-squad2) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 2.1955 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 2 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.4798 | 1.0 | 1 | 2.5057 | | 1.2709 | 2.0 | 2 | 2.1955 | ### Framework versions - Transformers 4.52.2 - Pytorch 2.6.0+cu124 - Datasets 2.14.4 - Tokenizers 0.21.1
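Not part of the generated card: a hedged inference sketch for this extractive QA checkpoint; the question and context are made-up examples.

```python
from transformers import pipeline

# Extractive question answering with the fine-tuned checkpoint
qa = pipeline("question-answering", model="RyyDer/deberta-tweetqa")
result = qa(
    question="Who scored the winning goal?",
    context="In last night's final, Rivera scored the winning goal in extra time.",
)
print(result["answer"], result["score"])
```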
Thamirawaran/Gemma-3-4B-Unsloth
Thamirawaran
2025-06-01T09:24:13Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma3", "trl", "en", "base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-01T08:22:55Z
--- base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Thamirawaran - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
KBhandari11/llama_block_1_language_identification_All
KBhandari11
2025-06-01T09:23:29Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "model: llama", "repo_name: llama_block_1_language_identification_All", "file_name: llama_block_1_language_identification_All_5000_5.pt", "base_model: meta-llama/Llama-2-7b-hf", "pruning_style: block", "community: 1", "pruning_ratio: 20", "dataset_label: language_identification", "sparsity_ratio: 20", "dataset: ['tasksource/bigbench', 'language_identification']", "finetune: All", "modules_size: 27", "modules: ['10_attn.v', '12_attn.q', '12_attn.v', '14_attn.q', '16_attn.k', '17_attn.q', '17_attn.v', '18_attn.o', '20_attn.k', '21_attn.o', '22_attn.k', '22_attn.o', '24_attn.q', '26_attn.q', '28_attn.k', '28_attn.q', '28_attn.v', '29_attn.q', '30_attn.k', '3_attn.k', '3_attn.v', '4_attn.o', '6_attn.k', '6_attn.v', '7_attn.o', '9_attn.k', '9_attn.v']", "rank: 1", "tags: ['model: llama', 'repo_name: llama_block_1_language_identification_All', 'file_name: llama_block_1_language_identification_All_5000_5.pt', 'base_model: meta-llama/Llama-2-7b-hf', 'pruning_style: block', 'community: 1', 'pruning_ratio: 20', 'dataset_label: language_identification', 'sparsity_ratio: 20', \"dataset: ['tasksource/bigbench', 'language_identification']\", 'finetune: All', 'modules_size: 27', \"modules: ['10_attn.v', '12_attn.q', '12_attn.v', '14_attn.q', '16_attn.k', '17_attn.q', '17_attn.v', '18_attn.o', '20_attn.k', '21_attn.o', '22_attn.k', '22_attn.o', '24_attn.q', '26_attn.q', '28_attn.k', '28_attn.q', '28_attn.v', '29_attn.q', '30_attn.k', '3_attn.k', '3_attn.v', '4_attn.o', '6_attn.k', '6_attn.v', '7_attn.o', '9_attn.k', '9_attn.v']\", 'rank: 1']", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-01T09:18:12Z
--- library_name: transformers tags: - 'model: llama' - 'repo_name: llama_block_1_language_identification_All' - 'file_name: llama_block_1_language_identification_All_5000_5.pt' - 'base_model: meta-llama/Llama-2-7b-hf' - 'pruning_style: block' - 'community: 1' - 'pruning_ratio: 20' - 'dataset_label: language_identification' - 'sparsity_ratio: 20' - 'dataset: [''tasksource/bigbench'', ''language_identification'']' - 'finetune: All' - 'modules_size: 27' - 'modules: [''10_attn.v'', ''12_attn.q'', ''12_attn.v'', ''14_attn.q'', ''16_attn.k'', ''17_attn.q'', ''17_attn.v'', ''18_attn.o'', ''20_attn.k'', ''21_attn.o'', ''22_attn.k'', ''22_attn.o'', ''24_attn.q'', ''26_attn.q'', ''28_attn.k'', ''28_attn.q'', ''28_attn.v'', ''29_attn.q'', ''30_attn.k'', ''3_attn.k'', ''3_attn.v'', ''4_attn.o'', ''6_attn.k'', ''6_attn.v'', ''7_attn.o'', ''9_attn.k'', ''9_attn.v'']' - 'rank: 1' - 'tags: [''model: llama'', ''repo_name: llama_block_1_language_identification_All'', ''file_name: llama_block_1_language_identification_All_5000_5.pt'', ''base_model: meta-llama/Llama-2-7b-hf'', ''pruning_style: block'', ''community: 1'', ''pruning_ratio: 20'', ''dataset_label: language_identification'', ''sparsity_ratio: 20'', "dataset: [''tasksource/bigbench'', ''language_identification'']", ''finetune: All'', ''modules_size: 27'', "modules: [''10_attn.v'', ''12_attn.q'', ''12_attn.v'', ''14_attn.q'', ''16_attn.k'', ''17_attn.q'', ''17_attn.v'', ''18_attn.o'', ''20_attn.k'', ''21_attn.o'', ''22_attn.k'', ''22_attn.o'', ''24_attn.q'', ''26_attn.q'', ''28_attn.k'', ''28_attn.q'', ''28_attn.v'', ''29_attn.q'', ''30_attn.k'', ''3_attn.k'', ''3_attn.v'', ''4_attn.o'', ''6_attn.k'', ''6_attn.v'', ''7_attn.o'', ''9_attn.k'', ''9_attn.v'']", ''rank: 1'']' --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. 
--> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/UGround-V1-72B-GGUF
mradermacher
2025-06-01T09:20:32Z
27
0
transformers
[ "transformers", "gguf", "multimodal", "en", "base_model:osunlp/UGround-V1-72B", "base_model:quantized:osunlp/UGround-V1-72B", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2025-01-14T22:41:26Z
--- base_model: osunlp/UGround-V1-72B language: - en library_name: transformers license: other license_link: https://huggingface.co/Qwen/Qwen2-VL-72B-Instruct/blob/main/LICENSE license_name: tongyi-qianwen quantized_by: mradermacher tags: - multimodal --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> static quants of https://huggingface.co/osunlp/UGround-V1-72B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/UGround-V1-72B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/UGround-V1-72B-GGUF/resolve/main/UGround-V1-72B.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | multi-modal supplement | | [GGUF](https://huggingface.co/mradermacher/UGround-V1-72B-GGUF/resolve/main/UGround-V1-72B.Q2_K.gguf) | Q2_K | 29.9 | | | [GGUF](https://huggingface.co/mradermacher/UGround-V1-72B-GGUF/resolve/main/UGround-V1-72B.Q3_K_S.gguf) | Q3_K_S | 34.6 | | | [GGUF](https://huggingface.co/mradermacher/UGround-V1-72B-GGUF/resolve/main/UGround-V1-72B.Q3_K_M.gguf) | Q3_K_M | 37.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/UGround-V1-72B-GGUF/resolve/main/UGround-V1-72B.Q3_K_L.gguf) | Q3_K_L | 39.6 | | | [GGUF](https://huggingface.co/mradermacher/UGround-V1-72B-GGUF/resolve/main/UGround-V1-72B.IQ4_XS.gguf) | IQ4_XS | 40.3 | | | [GGUF](https://huggingface.co/mradermacher/UGround-V1-72B-GGUF/resolve/main/UGround-V1-72B.Q4_K_S.gguf) | Q4_K_S | 44.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/UGround-V1-72B-GGUF/resolve/main/UGround-V1-72B.Q4_K_M.gguf) | Q4_K_M | 47.5 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/UGround-V1-72B-GGUF/resolve/main/UGround-V1-72B.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/UGround-V1-72B-GGUF/resolve/main/UGround-V1-72B.Q5_K_S.gguf.part2of2) | Q5_K_S | 51.5 | | | [PART 1](https://huggingface.co/mradermacher/UGround-V1-72B-GGUF/resolve/main/UGround-V1-72B.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/UGround-V1-72B-GGUF/resolve/main/UGround-V1-72B.Q5_K_M.gguf.part2of2) | Q5_K_M | 54.5 | | | [PART 1](https://huggingface.co/mradermacher/UGround-V1-72B-GGUF/resolve/main/UGround-V1-72B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/UGround-V1-72B-GGUF/resolve/main/UGround-V1-72B.Q6_K.gguf.part2of2) | Q6_K | 64.4 | very good quality | | [PART 1](https://huggingface.co/mradermacher/UGround-V1-72B-GGUF/resolve/main/UGround-V1-72B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/UGround-V1-72B-GGUF/resolve/main/UGround-V1-72B.Q8_0.gguf.part2of2) | Q8_0 | 77.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if 
you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
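Since the larger quants above ship as split files, here is a hedged sketch of joining the parts in plain Python; the part names are taken from the Q6_K row of the table, so adjust them for the quant you downloaded.

```python
# A minimal sketch: join the split Q6_K parts from the table above into one GGUF.
parts = [
    "UGround-V1-72B.Q6_K.gguf.part1of2",
    "UGround-V1-72B.Q6_K.gguf.part2of2",
]
with open("UGround-V1-72B.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            while chunk := src.read(1 << 20):  # copy in 1 MiB chunks
                out.write(chunk)
```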
Varinder2110/7f59bf0b-1fa8-4d60-9cae-598a676aa436
Varinder2110
2025-06-01T09:20:29Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-01T09:11:37Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: TOK --- # 7F59Bf0B 1Fa8 4D60 9Cae 598A676Aa436 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `TOK` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "TOK", "lora_weights": "https://huggingface.co/Varinder2110/7f59bf0b-1fa8-4d60-9cae-598a676aa436/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Varinder2110/7f59bf0b-1fa8-4d60-9cae-598a676aa436', weight_name='lora.safetensors') image = pipeline('TOK').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 600 - Learning rate: 0.0004 - LoRA rank: 4 ## Contribute your own examples You can use the [community tab](https://huggingface.co/Varinder2110/7f59bf0b-1fa8-4d60-9cae-598a676aa436/discussions) to add images that show off what you’ve made with this LoRA.
Seanwang1221/BaiLu_FLUX
Seanwang1221
2025-06-01T09:20:06Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
2025-06-01T09:19:53Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: >- BL, In a vibrant, sunlit courtyard framed by ancient stone walls adorned with ivy, a radiant woman poses confidently, her captivating brown eyes twinkling with mischief as she flashes a broad, gleaming-white grin. She wears an exquisite, one-shoulder dress crafted from lush, tropical flowers that seem to dance in the soft afternoon light, its delicate petals shimmering with a kaleidoscope of colors. Her hair cascades down her back in loose waves, catching the sunlight and creating a golden halo effect around her head. The camera captures her at eye level, focusing intently on her expressive face and the intricate floral pattern that swirls across the fabric of her dress, creating a stunning contrast between nature's beauty and the woman's modern elegance. output: url: images/Liblib_00671_.png - text: >- BL,The image shows a woman wearing a traditional Chinese hanfu, a green dress with intricate embroidery and a purple sash. She is standing stands in the center of the frame, framed by an array of blooming jasmine and wisteria vines that entwine around ancient stone pillars, evoking a romantic, ethereal atmosphere. The soft, warm light dances across her smooth, flawless skin, creating a serene and enchanting image that speaks volumes about the beauty, mystery, and allure of this captivating woman. output: url: images/Liblib_00683_.png - text: >- BL, In a noir-inspired scene set within the dimly lit, rain-soaked streets of 1940s San Francisco, a close-up image captures BL, a striking woman with brown hair cascading down her back, wearing an avant-garde, iridescent gown with intricate crystal jewelry adorning her neck and wrists. Her parted lips are slightly pursed in a knowing, enigmatic smile as she leans against the cold, rusted iron railing of a foggy bridge, the city lights shimmering behind her like a distant mirage, casting an ethereal glow upon her captivating visage. The camera angle is low and angled slightly upwards, emphasizing her confident and alluring demeanor as a single raindrop trickles down her cheek, adding to the mysterious and seductive ambiance of this mesmerizing portrait. output: url: images/Liblib_00667_.png base_model: black-forest-labs/FLUX.1-dev instance_prompt: BL --- # Bai Lu 白鹿 FLUX <Gallery /> ## Trigger words You should use `BL` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Seanwang1221/BaiLu_FLUX/tree/main) them in the Files & versions tab.
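As with the other FLUX LoRAs above, a hedged loading sketch, not from the original card; the sampler settings are illustrative.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Seanwang1221/BaiLu_FLUX")
image = pipe(
    "BL, portrait, soft window light",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("bailu.png")
```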
mradermacher/HebQwen-json-2025-GGUF
mradermacher
2025-06-01T09:15:38Z
12
0
transformers
[ "transformers", "gguf", "en", "base_model:asafd60/HebQwen-json-2025", "base_model:quantized:asafd60/HebQwen-json-2025", "endpoints_compatible", "region:us", "conversational" ]
null
2025-01-15T14:37:14Z
--- base_model: asafd60/HebQwen-json-2025 language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/asafd60/HebQwen-json-2025 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-GGUF/resolve/main/HebQwen-json-2025.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | multi-modal supplement | | [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-GGUF/resolve/main/HebQwen-json-2025.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-GGUF/resolve/main/HebQwen-json-2025.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-GGUF/resolve/main/HebQwen-json-2025.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-GGUF/resolve/main/HebQwen-json-2025.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-GGUF/resolve/main/HebQwen-json-2025.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-GGUF/resolve/main/HebQwen-json-2025.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-GGUF/resolve/main/HebQwen-json-2025.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-GGUF/resolve/main/HebQwen-json-2025.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-GGUF/resolve/main/HebQwen-json-2025.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-GGUF/resolve/main/HebQwen-json-2025.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-GGUF/resolve/main/HebQwen-json-2025.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-GGUF/resolve/main/HebQwen-json-2025.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
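Not from the card: a hedged local-inference sketch with llama-cpp-python after downloading one of the quants above; the quant choice and the prompt are examples.

```python
from llama_cpp import Llama

# Load a downloaded quant (Q4_K_M chosen as an example)
llm = Llama(model_path="HebQwen-json-2025.Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Return the name and city fields as JSON."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```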
mradermacher/HebQwen-json-2025-meta-GGUF
mradermacher
2025-06-01T09:11:26Z
82
0
transformers
[ "transformers", "gguf", "en", "base_model:asafd60/HebQwen-json-2025-meta", "base_model:quantized:asafd60/HebQwen-json-2025-meta", "endpoints_compatible", "region:us", "conversational" ]
null
2025-01-16T13:51:20Z
--- base_model: asafd60/HebQwen-json-2025-meta language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> static quants of https://huggingface.co/asafd60/HebQwen-json-2025-meta <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-meta-GGUF/resolve/main/HebQwen-json-2025-meta.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | multi-modal supplement | | [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-meta-GGUF/resolve/main/HebQwen-json-2025-meta.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-meta-GGUF/resolve/main/HebQwen-json-2025-meta.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-meta-GGUF/resolve/main/HebQwen-json-2025-meta.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-meta-GGUF/resolve/main/HebQwen-json-2025-meta.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-meta-GGUF/resolve/main/HebQwen-json-2025-meta.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-meta-GGUF/resolve/main/HebQwen-json-2025-meta.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-meta-GGUF/resolve/main/HebQwen-json-2025-meta.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-meta-GGUF/resolve/main/HebQwen-json-2025-meta.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-meta-GGUF/resolve/main/HebQwen-json-2025-meta.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-meta-GGUF/resolve/main/HebQwen-json-2025-meta.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-meta-GGUF/resolve/main/HebQwen-json-2025-meta.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-meta-GGUF/resolve/main/HebQwen-json-2025-meta.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
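An alternative to manual downloads, sketched under the assumption that llama-cpp-python's Hub helper and the huggingface_hub package are installed; the quant filename comes from the table above.

```python
from llama_cpp import Llama

# Fetch a quant straight from the Hub instead of downloading it by hand
llm = Llama.from_pretrained(
    repo_id="mradermacher/HebQwen-json-2025-meta-GGUF",
    filename="HebQwen-json-2025-meta.Q4_K_M.gguf",
)
print(llm("Hello!", max_tokens=32)["choices"][0]["text"])
```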
kleemyan/sportballs
kleemyan
2025-06-01T09:09:02Z
45
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-05-28T10:36:02Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-base-patch16-224 tags: - image-classification - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: sportballs results: - task: name: Image Classification type: image-classification dataset: name: kleemyan/sportballs type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sportballs This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the kleemyan/sportballs dataset. It achieves the following results on the evaluation set: - Loss: 0.3433 - Accuracy: 0.9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 15 | 0.7388 | 0.8333 | | No log | 2.0 | 30 | 0.5470 | 0.9 | | No log | 3.0 | 45 | 0.4627 | 0.8667 | | No log | 4.0 | 60 | 0.4269 | 0.8667 | | No log | 5.0 | 75 | 0.4168 | 0.8667 | ### Framework versions - Transformers 4.52.3 - Pytorch 2.6.0+cu118 - Datasets 3.6.0 - Tokenizers 0.21.1
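Not part of the generated card: a hedged inference sketch for this image classifier; the image path is a placeholder.

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="kleemyan/sportballs")
for pred in classifier("some_ball.jpg"):  # placeholder path
    print(f"{pred['label']}: {pred['score']:.3f}")
```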
HitmanReborn/Fancy_C
HitmanReborn
2025-06-01T09:07:58Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-01T09:05:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
malomalom/mistral-lora-assignments
malomalom
2025-06-01T09:07:54Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1", "region:us" ]
null
2025-06-01T09:05:31Z
--- base_model: mistralai/Mistral-7B-Instruct-v0.1 library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
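A hedged getting-started sketch for the placeholder "How to Get Started" section above, assuming this repo holds a LoRA adapter for the listed base model.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then apply the adapter from this repo on top of it
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
model = PeftModel.from_pretrained(base, "malomalom/mistral-lora-assignments")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
```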
kdzd/DeepSeek-R1-Distill-Llama-8B-FinQA-RL
kdzd
2025-06-01T09:05:01Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-20T15:43:30Z
--- base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** kdzd - **License:** apache-2.0 - **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
zhoushch/unit1-ppo-LunarLander-v2
zhoushch
2025-06-01T09:04:54Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-06-01T09:03:55Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 246.98 +/- 15.51 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the checkpoint filename inside the repo is an assumption, so adjust it to the actual file:
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the trained checkpoint from the Hub (filename assumed)
checkpoint = load_from_hub(
    repo_id="zhoushch/unit1-ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
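As a follow-up to the loading sketch above, a hedged evaluation snippet; it assumes gymnasium is installed with the box2d extra and that the `LunarLander-v2` id is still registered in your gymnasium version.

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

# Roll out the loaded policy for a few episodes and report the mean reward
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```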
KBhandari11/llama_block_0_disambiguation_qa_All
KBhandari11
2025-06-01T09:04:50Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "model: llama", "repo_name: llama_block_0_disambiguation_qa_All", "file_name: llama_block_0_disambiguation_qa_All_5000_5.pt", "base_model: meta-llama/Llama-2-7b-hf", "pruning_style: block", "community: 0", "pruning_ratio: 20", "dataset_label: disambiguation_qa", "sparsity_ratio: 20", "dataset: ['tasksource/bigbench', 'disambiguation_qa']", "finetune: All", "modules_size: 27", "modules: ['11_attn.q', '13_attn.o', '13_attn.q', '14_attn.k', '14_attn.v', '16_attn.q', '18_attn.k', '20_attn.o', '21_attn.k', '23_attn.k', '23_attn.o', '23_attn.v', '25_attn.o', '25_attn.v', '26_attn.k', '27_attn.o', '28_attn.o', '29_attn.k', '30_attn.o', '30_attn.v', '3_attn.o', '3_attn.q', '5_attn.q', '6_attn.o', '7_attn.q', '8_attn.o', '8_attn.q']", "rank: 3", "tags: ['model: llama', 'repo_name: llama_block_0_disambiguation_qa_All', 'file_name: llama_block_0_disambiguation_qa_All_5000_5.pt', 'base_model: meta-llama/Llama-2-7b-hf', 'pruning_style: block', 'community: 0', 'pruning_ratio: 20', 'dataset_label: disambiguation_qa', 'sparsity_ratio: 20', \"dataset: ['tasksource/bigbench', 'disambiguation_qa']\", 'finetune: All', 'modules_size: 27', \"modules: ['11_attn.q', '13_attn.o', '13_attn.q', '14_attn.k', '14_attn.v', '16_attn.q', '18_attn.k', '20_attn.o', '21_attn.k', '23_attn.k', '23_attn.o', '23_attn.v', '25_attn.o', '25_attn.v', '26_attn.k', '27_attn.o', '28_attn.o', '29_attn.k', '30_attn.o', '30_attn.v', '3_attn.o', '3_attn.q', '5_attn.q', '6_attn.o', '7_attn.q', '8_attn.o', '8_attn.q']\", 'rank: 3']", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-01T08:59:23Z
--- library_name: transformers tags: - 'model: llama' - 'repo_name: llama_block_0_disambiguation_qa_All' - 'file_name: llama_block_0_disambiguation_qa_All_5000_5.pt' - 'base_model: meta-llama/Llama-2-7b-hf' - 'pruning_style: block' - 'community: 0' - 'pruning_ratio: 20' - 'dataset_label: disambiguation_qa' - 'sparsity_ratio: 20' - 'dataset: [''tasksource/bigbench'', ''disambiguation_qa'']' - 'finetune: All' - 'modules_size: 27' - 'modules: [''11_attn.q'', ''13_attn.o'', ''13_attn.q'', ''14_attn.k'', ''14_attn.v'', ''16_attn.q'', ''18_attn.k'', ''20_attn.o'', ''21_attn.k'', ''23_attn.k'', ''23_attn.o'', ''23_attn.v'', ''25_attn.o'', ''25_attn.v'', ''26_attn.k'', ''27_attn.o'', ''28_attn.o'', ''29_attn.k'', ''30_attn.o'', ''30_attn.v'', ''3_attn.o'', ''3_attn.q'', ''5_attn.q'', ''6_attn.o'', ''7_attn.q'', ''8_attn.o'', ''8_attn.q'']' - 'rank: 3' - 'tags: [''model: llama'', ''repo_name: llama_block_0_disambiguation_qa_All'', ''file_name: llama_block_0_disambiguation_qa_All_5000_5.pt'', ''base_model: meta-llama/Llama-2-7b-hf'', ''pruning_style: block'', ''community: 0'', ''pruning_ratio: 20'', ''dataset_label: disambiguation_qa'', ''sparsity_ratio: 20'', "dataset: [''tasksource/bigbench'', ''disambiguation_qa'']", ''finetune: All'', ''modules_size: 27'', "modules: [''11_attn.q'', ''13_attn.o'', ''13_attn.q'', ''14_attn.k'', ''14_attn.v'', ''16_attn.q'', ''18_attn.k'', ''20_attn.o'', ''21_attn.k'', ''23_attn.k'', ''23_attn.o'', ''23_attn.v'', ''25_attn.o'', ''25_attn.v'', ''26_attn.k'', ''27_attn.o'', ''28_attn.o'', ''29_attn.k'', ''30_attn.o'', ''30_attn.v'', ''3_attn.o'', ''3_attn.q'', ''5_attn.q'', ''6_attn.o'', ''7_attn.q'', ''8_attn.o'', ''8_attn.q'']", ''rank: 3'']' --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. 
--> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
BootesVoid/cmbdcjfyd00eroq1hh8jrs62w_cmbdeiv6c002zj8kf5fmkm5n6
BootesVoid
2025-06-01T09:02:46Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-01T09:02:45Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: HANNAH --- # Cmbdcjfyd00Eroq1Hh8Jrs62W_Cmbdeiv6C002Zj8Kf5Fmkm5N6 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `HANNAH` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "HANNAH", "lora_weights": "https://huggingface.co/BootesVoid/cmbdcjfyd00eroq1hh8jrs62w_cmbdeiv6c002zj8kf5fmkm5n6/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmbdcjfyd00eroq1hh8jrs62w_cmbdeiv6c002zj8kf5fmkm5n6', weight_name='lora.safetensors') image = pipeline('HANNAH').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmbdcjfyd00eroq1hh8jrs62w_cmbdeiv6c002zj8kf5fmkm5n6/discussions) to add images that show off what you’ve made with this LoRA.
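The card above defers LoRA weighting to the diffusers documentation; as a rough sketch, assuming a diffusers version with the PEFT backend enabled, the LoRA's influence can be scaled along these lines. The adapter name `hannah` and the 0.8 weight are illustrative assumptions, not values from the card.

```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the base model and attach the LoRA under an explicit adapter name.
pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights(
    'BootesVoid/cmbdcjfyd00eroq1hh8jrs62w_cmbdeiv6c002zj8kf5fmkm5n6',
    weight_name='lora.safetensors',
    adapter_name='hannah',  # illustrative name, not from the original card
)

# Scale the LoRA's influence; 0.8 is an arbitrary example weight.
pipeline.set_adapters(['hannah'], adapter_weights=[0.8])
image = pipeline('HANNAH').images[0]
```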
Varinder2110/4b5ae777-3d64-4f23-9ddd-a5fe856ca3d3
Varinder2110
2025-06-01T08:57:00Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-01T08:51:26Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: TOK --- # 4B5Ae777 3D64 4F23 9Ddd A5Fe856Ca3D3 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `TOK` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "TOK", "lora_weights": "https://huggingface.co/Varinder2110/4b5ae777-3d64-4f23-9ddd-a5fe856ca3d3/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Varinder2110/4b5ae777-3d64-4f23-9ddd-a5fe856ca3d3', weight_name='lora.safetensors') image = pipeline('TOK').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 600 - Learning rate: 0.0004 - LoRA rank: 4 ## Contribute your own examples You can use the [community tab](https://huggingface.co/Varinder2110/4b5ae777-3d64-4f23-9ddd-a5fe856ca3d3/discussions) to add images that show off what you’ve made with this LoRA.
BootesVoid/cmb8xnjnl01qr1b1yn4ma9vqf_cmbdebsqx002aj8kf6w9ap32m
BootesVoid
2025-06-01T08:53:48Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-01T08:53:47Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: JANE --- # Cmb8Xnjnl01Qr1B1Yn4Ma9Vqf_Cmbdebsqx002Aj8Kf6W9Ap32M <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `JANE` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "JANE", "lora_weights": "https://huggingface.co/BootesVoid/cmb8xnjnl01qr1b1yn4ma9vqf_cmbdebsqx002aj8kf6w9ap32m/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmb8xnjnl01qr1b1yn4ma9vqf_cmbdebsqx002aj8kf6w9ap32m', weight_name='lora.safetensors') image = pipeline('JANE').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmb8xnjnl01qr1b1yn4ma9vqf_cmbdebsqx002aj8kf6w9ap32m/discussions) to add images that show off what you’ve made with this LoRA.
KBhandari11/llama_block_0_tracking_shuffled_objects_Complete_Random
KBhandari11
2025-06-01T08:52:30Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "model: llama", "repo_name: llama_block_0_tracking_shuffled_objects_Complete Random", "file_name: llama_block_0_tracking_shuffled_objects_Complete Random_5000_5.pt", "base_model: meta-llama/Llama-2-7b-hf", "pruning_style: block", "community: 0", "pruning_ratio: 20", "dataset_label: tracking_shuffled_objects", "sparsity_ratio: 20", "dataset: ['tasksource/bigbench', 'tracking_shuffled_objects']", "finetune: Complete Random", "modules_size: 27", "modules: ['8_attn.k', '13_attn.v', '25_gate', '23_gate', '9_mlp.up', '26_mlp.down', '5_mlp.up', '6_attn.v', '27_attn.q', '11_gate', '10_attn.q', '27_attn.v', '20_gate', '10_mlp.down', '10_attn.o', '3_attn.k', '19_mlp.down', '8_attn.v', '19_mlp.up', '21_attn.o', '24_attn.k', '6_attn.q', '13_gate', '17_attn.k', '7_attn.k', '18_mlp.down', '17_attn.v']", "rank: 2", "tags: ['model: llama', 'repo_name: llama_block_0_tracking_shuffled_objects_Complete Random', 'file_name: llama_block_0_tracking_shuffled_objects_Complete Random_5000_5.pt', 'base_model: meta-llama/Llama-2-7b-hf', 'pruning_style: block', 'community: 0', 'pruning_ratio: 20', 'dataset_label: tracking_shuffled_objects', 'sparsity_ratio: 20', \"dataset: ['tasksource/bigbench', 'tracking_shuffled_objects']\", 'finetune: Complete Random', 'modules_size: 27', \"modules: ['8_attn.k', '13_attn.v', '25_gate', '23_gate', '9_mlp.up', '26_mlp.down', '5_mlp.up', '6_attn.v', '27_attn.q', '11_gate', '10_attn.q', '27_attn.v', '20_gate', '10_mlp.down', '10_attn.o', '3_attn.k', '19_mlp.down', '8_attn.v', '19_mlp.up', '21_attn.o', '24_attn.k', '6_attn.q', '13_gate', '17_attn.k', '7_attn.k', '18_mlp.down', '17_attn.v']\", 'rank: 2']", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-01T08:47:03Z
--- library_name: transformers tags: - 'model: llama' - 'repo_name: llama_block_0_tracking_shuffled_objects_Complete Random' - 'file_name: llama_block_0_tracking_shuffled_objects_Complete Random_5000_5.pt' - 'base_model: meta-llama/Llama-2-7b-hf' - 'pruning_style: block' - 'community: 0' - 'pruning_ratio: 20' - 'dataset_label: tracking_shuffled_objects' - 'sparsity_ratio: 20' - 'dataset: [''tasksource/bigbench'', ''tracking_shuffled_objects'']' - 'finetune: Complete Random' - 'modules_size: 27' - 'modules: [''8_attn.k'', ''13_attn.v'', ''25_gate'', ''23_gate'', ''9_mlp.up'', ''26_mlp.down'', ''5_mlp.up'', ''6_attn.v'', ''27_attn.q'', ''11_gate'', ''10_attn.q'', ''27_attn.v'', ''20_gate'', ''10_mlp.down'', ''10_attn.o'', ''3_attn.k'', ''19_mlp.down'', ''8_attn.v'', ''19_mlp.up'', ''21_attn.o'', ''24_attn.k'', ''6_attn.q'', ''13_gate'', ''17_attn.k'', ''7_attn.k'', ''18_mlp.down'', ''17_attn.v'']' - 'rank: 2' - 'tags: [''model: llama'', ''repo_name: llama_block_0_tracking_shuffled_objects_Complete Random'', ''file_name: llama_block_0_tracking_shuffled_objects_Complete Random_5000_5.pt'', ''base_model: meta-llama/Llama-2-7b-hf'', ''pruning_style: block'', ''community: 0'', ''pruning_ratio: 20'', ''dataset_label: tracking_shuffled_objects'', ''sparsity_ratio: 20'', "dataset: [''tasksource/bigbench'', ''tracking_shuffled_objects'']", ''finetune: Complete Random'', ''modules_size: 27'', "modules: [''8_attn.k'', ''13_attn.v'', ''25_gate'', ''23_gate'', ''9_mlp.up'', ''26_mlp.down'', ''5_mlp.up'', ''6_attn.v'', ''27_attn.q'', ''11_gate'', ''10_attn.q'', ''27_attn.v'', ''20_gate'', ''10_mlp.down'', ''10_attn.o'', ''3_attn.k'', ''19_mlp.down'', ''8_attn.v'', ''19_mlp.up'', ''21_attn.o'', ''24_attn.k'', ''6_attn.q'', ''13_gate'', ''17_attn.k'', ''7_attn.k'', ''18_mlp.down'', ''17_attn.v'']", ''rank: 2'']' --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. 
--> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LuvU4ever/Qwen2.5-3b-v1-q8_0
LuvU4ever
2025-06-01T08:51:29Z
0
0
transformers
[ "transformers", "gguf", "qwen2", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-01T08:50:37Z
--- base_model: unsloth/qwen2.5-1.5b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** LuvU4ever - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-1.5b-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Sharing22/zxcvv1
Sharing22
2025-06-01T08:46:34Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-01T08:42:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fcski/EoN
fcski
2025-06-01T08:45:01Z
0
0
null
[ "license:other", "region:us" ]
null
2025-03-22T13:27:40Z
--- license: other license_name: fair-ai-public-license-1.0-sd license_link: LICENSE --- 25th May, 2025. sample ![](00081-2376967371-EoN_v1.02.png) ver1.02 I made some changes that you wouldn't even notice unless you were told, using eonv101_realo800ep.safetensors and more models. Fixed some costumes... but no big change. ---- In Japanese (translated): I made a few fixes, but not much has changed. As my own material I used eonv101_realo800ep.safetensors. During creation I tried AutoMBW, but ImageReward is a tricky one... hmm. My impression is that unless I swap out the scoring model, results won't go in the direction I want (costumes fall apart constantly...). a1111 is heavy, so I modified it a bit and added GPU memory release (gc.collect and torch.cuda.empty_cache); with that, one run produces 5 images in roughly 15-16 minutes. Even with early stopping it took about 12 hours to build one model, and the results weren't good, so in the end it's almost all manual. Recent news: Windows 10 support ends this October, so with the excuse of migrating to Windows 11 I ordered a 5070 Ti machine (it hasn't arrived yet). It has 128GB of RAM (64GB x2), expandable to 256GB (64GB x4) if that runs short. Honestly, I panic-bought it after seeing news that GPU prices will rise 15% in the future. -> It arrived on 5/31 and I tried it a little; it's another world... What is this speed? SDXL used to take 2-4 minutes per image, and now it's a few seconds... ---- 29th March, 2025. sample ![](00103-2376967371-EoN_v1.01.png) ver1.01, for a slightly more realistic look, using addift_realistic3.safetensors and more models. Fixed hair and skin a bit. ---- In Japanese (translated): The skin areas tended to look slightly off and I wanted to fix that, and also push the model a little more toward realism, so I made small corrections. As my own material I used addift_realistic3.safetensors, created with traintrain's addift feature. ---- 22nd March, 2025, sample ![](00254-2376967371-EoN_v10.png) This is a semi-realistic model based on NoobAI v1.1 Epsilon Predict. esotericism of noobai (EoN) recommended settings: ``` prompt: *** MUST CHANGE THIS LINE *** character name, costume, situation, pose, and so on..., BREAK (realistic:1.5), masterpiece, best quality, reference photo, photorealistic, cosplay photo, highres, very awa, photo background, detailed face, detailed background, negative prompt: lowres, (worst quality:1.3), bad quality, normal quality, bad anatomy, illustration, anime, (3d:1.5), game, cg, oekaki, figure, BREAK watermark, signature, logo, artist name, text, username, stamp, date, footer, header, title, subtitle, body writing, censored, sampler: Euler A steps: 40 or higher (maybe 28) ``` ---- Prompt tips: - (realistic:1.5) for realistic images. - realistic for semi-real or 3D, to create characters. The goal is to make the NoobAI v1.1 epsilon-pred model more realistic; this is the result of trial and error in an attempt to specialize in generating natural-looking cosplay images. I started making it for myself, and my environment crashed so often (and I was so annoyed about it) that I have no notes on how I made it, but I do know that I'm using my own LoRA. I put my own LoRA (also based on NoobAI v1.1 epsilon pred) on Hugging Face. The merge with LoRA was done using a modified version of a1111 -> https://github.com/frsk9999/sdwebui_savetensors_aftercreateimg I think you can probably do the same thing using comfyui. The license follows noobai's. So, mix me. ---- I may have merged these LoRAs, so I'm placing them here. - test_leco_3d.safetensors - test_noobv11_coswithmlr_2000ep.safetensors - test_noobv11_coswithmlr_2000ep_append2000ep2.safetensors - test_noobv11_coswithmlr_append2000ep2.safetensors ---- In Japanese (translated): I named it "esotericism of noobai". For the recommended prompt, see the English section above. ``` The goal is to make NoobAI v1.1 epsilon pred realistic. My environment crashed so often that I couldn't take notes, and I got angry and pushed ahead without notes anyway, so there is no recipe (partly because I used a merge method that leaves nothing in the logs... it's lost). The concept is close to real_model_N; that is, the aim is to bring characters into the real world. Many realistic models are out there, but most have trouble reproducing costumes, so I kept mixing and then pulling layers back from the original noobai, over and over. Maybe because I'm using my own sketchy tools, things got messy; when I tried to build a LoRA from an intermediate model, it turned into NaNs. That said, this model isn't perfect either and quite often fails to reproduce costumes, so I plan to keep evolving it little by little, changing the mixing ratios and so on. By the way, I'm still on a GTX 1070. I have 64GB of RAM but it's starting to run short... video is out of the question... I wish 50x0 prices would hurry up and stabilize... ```
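The card above mentions patching a1111 to release GPU memory between generations via gc.collect and torch.cuda.empty_cache. This is not the author's actual patch, but as a minimal sketch, the general pattern looks like this:

```py
import gc
import torch

def free_gpu_memory():
    # Drop Python-level references, then release cached CUDA blocks.
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()

# Illustrative loop: call it between generations to curb memory growth.
for i in range(5):
    # ... generate one image here ...
    free_gpu_memory()
```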
heather213/emotion-classifier
heather213
2025-06-01T08:43:04Z
3
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-26T11:02:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sujalrajpoot/TrueSyncGenZ-7B
sujalrajpoot
2025-06-01T08:40:12Z
51
0
null
[ "pytorch", "mistral", "license:apache-2.0", "region:us" ]
null
2025-05-31T10:44:52Z
--- license: apache-2.0 --- # 🧠 TrueSyncGenZ-7B **TrueSyncGenZ-7B** is a fine-tuned conversational language model built on top of [`unsloth/mistral-7b-v0.3`](https://huggingface.co/unsloth/mistral-7b-v0.3), designed with a Gen Z-inspired, emoji-rich personality. It’s optimized for emotionally expressive, curious, and engaging conversations, making it ideal for applications that require relatable, human-like interaction with a fun twist. This model is developed and maintained by **TrueSyncAI**. --- ## 🔍 Model Details - **Base Model**: `unsloth/mistral-7b-v0.3` - **Fine-Tuned Model Name**: `TrueSyncGenZ-7B` - **Dataset Used**: [`sujalrajpoot/English-OpenEmotions`](https://huggingface.co/datasets/sujalrajpoot/English-OpenEmotions) - **Parameters**: 7 Billion (7B) - **Architecture**: Mistral-style decoder-only transformer - **Training Objective**: Align the model to be emotionally expressive, curiosity-driven, and Gen Z-aligned in tone and vocabulary --- ## 💬 Sample Interaction ### User: Have you ever felt curious about something? What was it? ### TrueSyncGenZ-7B Response: Yo, I'm always down to learn new things! 🧠💡 Recently, I've been super curious about the inner workings of the human brain. Like, how does our mind process all this info we're bombarded with every day? 🧠💭 It's mad complex, but I'm determined to get a better grasp on it! 💪 I'm also fascinated by the power of language and how we use words to connect with each other. 🗣️💬 It's crazy to think that something as simple as a few letters can evoke so many feelings and memories. 🤯 So yeah, I'm always looking to expand my knowledge and satisfy my curiosity! 📚💡 ### User: Imagine you helped someone in need. How would you feel? ### TrueSyncGenZ-7B Response: Yo, if I helped someone out, I'd be straight up stoked, fam! 🙌 It's like, the best feelin' ever to know you made a difference for someone who was strugglin'. 💪 Plus, it's just straight up cool to be there for peeps when they need ya. 🤗 So yeah, I'd be all hyped up on those good vibes and ready to spread more positivity! 🌞💕 ### User: Why are Gen Zs always so into “vibes”? ### TrueSyncGenZ-7B Response: Aight, so Gen Z is all about those vibes, right? 🎵 It's like, we're always chasing that perfect mix of good feels and positive energy. 🌈 Part of it is probably just our age - we're still figuring out who we are and what we want out of life. 🧠 So, we're all about that self-discovery and personal growth. 🌟 And what better way to do that than by focusing on our vibes? Another reason might be that we're more connected than ever before. 📱 With social media and the internet, we're constantly surrounded by a whole bunch of different influences and perspectives. 🌐 So, we're always trying to find our place in all that noise. 🗣️ And our vibes help us do just that - they're our unique signature, our way of standing out from the crowd. 🌟 Plus, let's be real - we live in some wild times. 😰 There's a whole lot of stress and drama out there, and it can be easy to get caught up in all that negativity. 😞 But when we focus on our vibes, we're able to block out the noise and just chill. 🛀 It's like our own little bubble of positivity, and we're not about to let anyone burst it! 💪 So yeah, that's probably why Gen Z is always so into those vibes. 🎶 It's our way of staying true to ourselves, connecting with others, and just enjoying the ride. 🌈 And ain't that what life's all about, fam? 🙌 ### User: Why do humans overthink so much? Does the brain love drama? 
### TrueSyncGenZ-7B Response: Yo, that's a deep question, fam! 🤔 But yeah, I totally get where you're comin' from. 💭 The brain is a wild ride, no cap! 🧠 It's like our personal DJ, always playin' those drama tunes on repeat. 🎵 So, why do we overthink so much? Well, it's a combo of factors, but here's the tea: 🍵 1. We're wired that way, homie. 🧬 Our brains are programmed to analyze and overanalyze everything. It's like our default setting, ya know? 2. We compare ourselves to others. 🙈 Ain't no one perfect, but we still can't help but compare our lives to those around us. This just fuels more overthinking and drama. 3. We're scared of the unknown. 😰 Change is scary, and our brains love to play out all the what-ifs and worst-case scenarios. 4. We're stressed AF. 😓 Life's a bitch sometimes, and all that stress and anxiety just add fuel to the overthinking fire. 5. We're not present. 😌 When we're not in the moment, our minds start wandering, and that's when the drama starts. So yeah, the brain loves drama, no cap! 🙅‍♂️ It's all a vicious cycle, but we gotta try and break outta that pattern. 💪 Focus on the present, practice some self-love, and try not to compare yourself to others. 🌟 It ain't easy, but it's worth it! 🙌 --- ## 🎯 Use Cases - Gen Z-centric chatbots - Emotionally expressive virtual assistants - Language learning tools with personality - Youth-focused mental health apps - Content creators seeking quirky AI voices --- ## 🛠️ How to Use ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name = "sujalrajpoot/TrueSyncGenZ-7B" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto") chat = pipeline("text-generation", model=model, tokenizer=tokenizer) response = chat("### Instruction:\nHave you ever felt curious about something? What was it?\n\n### Response:", max_new_tokens=200, do_sample=True) print(response[0]['generated_text']) ``` [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mradermacher/UI-TARS-2B-SFT-GGUF
mradermacher
2025-06-01T08:39:37Z
117
0
transformers
[ "transformers", "gguf", "multimodal", "gui", "en", "base_model:ByteDance-Seed/UI-TARS-2B-SFT", "base_model:quantized:ByteDance-Seed/UI-TARS-2B-SFT", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-01-21T14:03:08Z
--- base_model: ByteDance-Seed/UI-TARS-2B-SFT language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - multimodal - gui --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/ByteDance-Seed/UI-TARS-2B-SFT <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/UI-TARS-2B-SFT-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/UI-TARS-2B-SFT-GGUF/resolve/main/UI-TARS-2B-SFT.Q2_K.gguf) | Q2_K | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/UI-TARS-2B-SFT-GGUF/resolve/main/UI-TARS-2B-SFT.Q3_K_S.gguf) | Q3_K_S | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/UI-TARS-2B-SFT-GGUF/resolve/main/UI-TARS-2B-SFT.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/UI-TARS-2B-SFT-GGUF/resolve/main/UI-TARS-2B-SFT.Q3_K_L.gguf) | Q3_K_L | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/UI-TARS-2B-SFT-GGUF/resolve/main/UI-TARS-2B-SFT.IQ4_XS.gguf) | IQ4_XS | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/UI-TARS-2B-SFT-GGUF/resolve/main/UI-TARS-2B-SFT.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/UI-TARS-2B-SFT-GGUF/resolve/main/UI-TARS-2B-SFT.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/UI-TARS-2B-SFT-GGUF/resolve/main/UI-TARS-2B-SFT.Q5_K_S.gguf) | Q5_K_S | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/UI-TARS-2B-SFT-GGUF/resolve/main/UI-TARS-2B-SFT.Q5_K_M.gguf) | Q5_K_M | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/UI-TARS-2B-SFT-GGUF/resolve/main/UI-TARS-2B-SFT.mmproj-fp16.gguf) | mmproj-fp16 | 1.4 | multi-modal supplement | | [GGUF](https://huggingface.co/mradermacher/UI-TARS-2B-SFT-GGUF/resolve/main/UI-TARS-2B-SFT.Q6_K.gguf) | Q6_K | 1.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/UI-TARS-2B-SFT-GGUF/resolve/main/UI-TARS-2B-SFT.Q8_0.gguf) | Q8_0 | 2.0 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/UI-TARS-2B-SFT-GGUF/resolve/main/UI-TARS-2B-SFT.f16.gguf) | f16 | 3.7 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
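The Usage section above defers to TheBloke's READMEs for concatenating multi-part files. The quants in this repo are single files, but for repos that ship plain byte-split parts, a minimal sketch (with hypothetical file names) is simple binary concatenation:

```py
from pathlib import Path

# Hypothetical split naming; adjust the glob pattern to the actual part files.
parts = sorted(Path('.').glob('model.Q8_0.gguf.part*'))
with open('model.Q8_0.gguf', 'wb') as merged:
    for part in parts:
        merged.write(part.read_bytes())  # byte-wise concatenation, in order
```

Note that this applies only to cat-style splits; shards produced by llama.cpp's gguf-split tool should be merged with that tool or loaded directly instead.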
mradermacher/Qwen2-VL-7B-Instruct-GGUF
mradermacher
2025-06-01T08:38:59Z
47
1
transformers
[ "transformers", "gguf", "multimodal", "en", "base_model:Qwen/Qwen2-VL-7B-Instruct", "base_model:quantized:Qwen/Qwen2-VL-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-01-21T19:33:35Z
--- base_model: Qwen/Qwen2-VL-7B-Instruct language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - multimodal --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2-VL-7B-Instruct-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-Instruct-GGUF/resolve/main/Qwen2-VL-7B-Instruct.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | multi-modal supplement | | [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-Instruct-GGUF/resolve/main/Qwen2-VL-7B-Instruct.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-Instruct-GGUF/resolve/main/Qwen2-VL-7B-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-Instruct-GGUF/resolve/main/Qwen2-VL-7B-Instruct.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-Instruct-GGUF/resolve/main/Qwen2-VL-7B-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-Instruct-GGUF/resolve/main/Qwen2-VL-7B-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-Instruct-GGUF/resolve/main/Qwen2-VL-7B-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-Instruct-GGUF/resolve/main/Qwen2-VL-7B-Instruct.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-Instruct-GGUF/resolve/main/Qwen2-VL-7B-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-Instruct-GGUF/resolve/main/Qwen2-VL-7B-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-Instruct-GGUF/resolve/main/Qwen2-VL-7B-Instruct.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-Instruct-GGUF/resolve/main/Qwen2-VL-7B-Instruct.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-Instruct-GGUF/resolve/main/Qwen2-VL-7B-Instruct.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
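As with the card above, usage of these files assumes familiarity with a GGUF loader. A rough sketch of loading one of these quants for text generation with llama-cpp-python, assuming your llama.cpp build supports this architecture (vision input additionally requires the mmproj file and a matching handler), might look like this:

```py
from llama_cpp import Llama

# Assumes the downloaded quant sits in the working directory.
llm = Llama(model_path='Qwen2-VL-7B-Instruct.Q4_K_M.gguf', n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{'role': 'user', 'content': 'Briefly explain GGUF quantization.'}],
    max_tokens=128,
)
print(out['choices'][0]['message']['content'])
```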
mradermacher/Qwen2-7B-FullBirdnTiger-SmallDB-GGUF
mradermacher
2025-06-01T08:36:37Z
0
0
transformers
[ "transformers", "gguf", "llama-factory", "en", "endpoints_compatible", "region:us", "conversational" ]
null
2025-01-22T12:25:29Z
--- base_model: AiAugmentAero/Qwen2-7B-FullBirdnTiger-SmallDB language: - en library_name: transformers quantized_by: mradermacher tags: - llama-factory --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/AiAugmentAero/Qwen2-7B-FullBirdnTiger-SmallDB <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-FullBirdnTiger-SmallDB-GGUF/resolve/main/Qwen2-7B-FullBirdnTiger-SmallDB.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | multi-modal supplement | | [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-FullBirdnTiger-SmallDB-GGUF/resolve/main/Qwen2-7B-FullBirdnTiger-SmallDB.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-FullBirdnTiger-SmallDB-GGUF/resolve/main/Qwen2-7B-FullBirdnTiger-SmallDB.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-FullBirdnTiger-SmallDB-GGUF/resolve/main/Qwen2-7B-FullBirdnTiger-SmallDB.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-FullBirdnTiger-SmallDB-GGUF/resolve/main/Qwen2-7B-FullBirdnTiger-SmallDB.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-FullBirdnTiger-SmallDB-GGUF/resolve/main/Qwen2-7B-FullBirdnTiger-SmallDB.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-FullBirdnTiger-SmallDB-GGUF/resolve/main/Qwen2-7B-FullBirdnTiger-SmallDB.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-FullBirdnTiger-SmallDB-GGUF/resolve/main/Qwen2-7B-FullBirdnTiger-SmallDB.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-FullBirdnTiger-SmallDB-GGUF/resolve/main/Qwen2-7B-FullBirdnTiger-SmallDB.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-FullBirdnTiger-SmallDB-GGUF/resolve/main/Qwen2-7B-FullBirdnTiger-SmallDB.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-FullBirdnTiger-SmallDB-GGUF/resolve/main/Qwen2-7B-FullBirdnTiger-SmallDB.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-FullBirdnTiger-SmallDB-GGUF/resolve/main/Qwen2-7B-FullBirdnTiger-SmallDB.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-FullBirdnTiger-SmallDB-GGUF/resolve/main/Qwen2-7B-FullBirdnTiger-SmallDB.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model 
Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
HitmanReborn/Fancy_B
HitmanReborn
2025-06-01T08:35:23Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-01T08:32:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mingxilei/auf_imdb_reward_1.0_0.01_m_40
mingxilei
2025-06-01T08:34:30Z
0
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-01T04:20:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/qwen2vl-model-2b-instruct-spatial-information-v1-GGUF
mradermacher
2025-06-01T08:32:20Z
4
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "trl", "sft", "en", "base_model:Haziqa/qwen2vl-model-2b-instruct-spatial-information-v1", "base_model:quantized:Haziqa/qwen2vl-model-2b-instruct-spatial-information-v1", "endpoints_compatible", "region:us", "conversational" ]
null
2025-01-23T19:16:07Z
--- base_model: Haziqa/qwen2vl-model-2b-instruct-spatial-information-v1 language: - en library_name: transformers model_name: qwen2vl-model-2b-instruct-spatial-information-v1 quantized_by: mradermacher tags: - generated_from_trainer - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Haziqa/qwen2vl-model-2b-instruct-spatial-information-v1 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/qwen2vl-model-2b-instruct-spatial-information-v1-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/qwen2vl-model-2b-instruct-spatial-information-v1-GGUF/resolve/main/qwen2vl-model-2b-instruct-spatial-information-v1.Q2_K.gguf) | Q2_K | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/qwen2vl-model-2b-instruct-spatial-information-v1-GGUF/resolve/main/qwen2vl-model-2b-instruct-spatial-information-v1.Q3_K_S.gguf) | Q3_K_S | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/qwen2vl-model-2b-instruct-spatial-information-v1-GGUF/resolve/main/qwen2vl-model-2b-instruct-spatial-information-v1.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/qwen2vl-model-2b-instruct-spatial-information-v1-GGUF/resolve/main/qwen2vl-model-2b-instruct-spatial-information-v1.Q3_K_L.gguf) | Q3_K_L | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/qwen2vl-model-2b-instruct-spatial-information-v1-GGUF/resolve/main/qwen2vl-model-2b-instruct-spatial-information-v1.IQ4_XS.gguf) | IQ4_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/qwen2vl-model-2b-instruct-spatial-information-v1-GGUF/resolve/main/qwen2vl-model-2b-instruct-spatial-information-v1.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/qwen2vl-model-2b-instruct-spatial-information-v1-GGUF/resolve/main/qwen2vl-model-2b-instruct-spatial-information-v1.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/qwen2vl-model-2b-instruct-spatial-information-v1-GGUF/resolve/main/qwen2vl-model-2b-instruct-spatial-information-v1.Q5_K_S.gguf) | Q5_K_S | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/qwen2vl-model-2b-instruct-spatial-information-v1-GGUF/resolve/main/qwen2vl-model-2b-instruct-spatial-information-v1.Q5_K_M.gguf) | Q5_K_M | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/qwen2vl-model-2b-instruct-spatial-information-v1-GGUF/resolve/main/qwen2vl-model-2b-instruct-spatial-information-v1.Q6_K.gguf) | Q6_K | 1.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/qwen2vl-model-2b-instruct-spatial-information-v1-GGUF/resolve/main/qwen2vl-model-2b-instruct-spatial-information-v1.mmproj-fp16.gguf) | mmproj-fp16 | 1.4 | multi-modal supplement | | [GGUF](https://huggingface.co/mradermacher/qwen2vl-model-2b-instruct-spatial-information-v1-GGUF/resolve/main/qwen2vl-model-2b-instruct-spatial-information-v1.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality | | 
[GGUF](https://huggingface.co/mradermacher/qwen2vl-model-2b-instruct-spatial-information-v1-GGUF/resolve/main/qwen2vl-model-2b-instruct-spatial-information-v1.f16.gguf) | f16 | 3.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
him1920212/style-bert-vits2-hamidashi
him1920212
2025-06-01T08:30:16Z
0
0
null
[ "ja", "base_model:litagin/Style-Bert-VITS2-2.0-base-JP-Extra", "base_model:finetune:litagin/Style-Bert-VITS2-2.0-base-JP-Extra", "license:other", "region:us" ]
null
2025-06-01T04:32:19Z
--- license: other license_name: other license_link: https://madosoft.net/guideline language: - ja base_model: - litagin/Style-Bert-VITS2-2.0-base-JP-Extra --- This is a VITS TTS model trained on the voice dataset of "Hamidashi Creative Totsu" (「ハミダシクリエイティブ 凸」). You are free to download and use the model, but please keep its use within the scope of a personal hobby. For details, please check the [Madosoft copyright guidelines](https://madosoft.net/guideline). Style-Bert-VITS2 model finetuned using voice data from Hamidashi Creative Totsu. Finetuning code is from [litagin02/Style-Bert-VITS2](https://github.com/litagin02/Style-Bert-VITS2) ## Model Details あすみ (Asumi): 706 annotated wav files, trained for 100 epochs 妃愛 (Hiyori): 1368 annotated wav files, trained for 100 epochs 華乃 (Kano): 1252 annotated wav files, trained for 100 epochs ## Examples The Japanese text in parentheses after each sample is the line spoken in that clip. 華乃 (Kano): <audio controls src="https://huggingface.co/him1920212/style-bert-vits2-hamidashi/resolve/main/sample/kan/audio%20(17).wav"></audio> (でも、なんであいつ私と真面目な話するとき、いつもいきなり家訪ねてくるの!?) <audio controls src="https://huggingface.co/him1920212/style-bert-vits2-hamidashi/resolve/main/sample/kan/audio%20(15).wav"></audio> (久しぶりに男の子と話して、行けなかった筈の学園に登校できて、 加われなかった人の輪の中へ入れたこの1週間が楽しくなり始めてた。 それは、憬れのひよりんがいたからって部分も大きいけど、でも今の環境の中で私が一番話したのは…… 私を部屋から連れだして、仲良くしてくれた男の子は。 オタクだと周りにバレてから、私の望んだ学園生活は完全に終わりを告げたと思ってた。 だけど、同じクラスの男の子と楽しくオタ話できるような放課後に、私は今でもずっと憬れていて――) <audio controls src="https://huggingface.co/him1920212/style-bert-vits2-hamidashi/resolve/main/sample/kan/audio%20(13).wav"></audio> (あれも、女の子の一人エッチ絵描きながらベッドの前に姿見置いてた私が悪いんだけど。まあその、シてたし。 ていうか私の絵が好きみたいだし、生徒会役員として必要だから、大切に扱ってくれてるんだろうけどね。 でも私、顔はブスでも、身体はそこそこだったりしない? デブなだけかもしれないけど、胸はまあまあ自信ある。) <audio controls src="https://huggingface.co/him1920212/style-bert-vits2-hamidashi/resolve/main/sample/kan/audio%20(14).wav"></audio> (待って無理。待って。思いだすとヤバい。だからベッドの上は見ないふりしてたのに、男の子の筋肉思いだすとやばい。 え、でもダメじゃない? あいつ別に私のこと女だと思ってないでしょ?) <audio controls src="https://huggingface.co/him1920212/style-bert-vits2-hamidashi/resolve/main/sample/kan/audio%20(16).wav"></audio> (やめた。考えていたら、疲れた。ベッドへ寝転がり、目蓋を腕で覆った。 なに少し感傷に浸ってんの。生徒会の慰留に来てくれた程度の話で重いでしょ。これだから陰キャはって言われそう。 男の子慣れしてないのもあるかもね。部屋へ入れた程度で、舞いあがっちゃって恥ずかしい。 それも遊びに来たわけじゃなくて、生徒会の用事なのに。 でもね、実はこういうのずっと憬れてた。喧嘩して、仲直りして、また明日みたいなの。 あいつ、けっこう真剣に私を説得してくれたし、そういうの嫌いじゃない。 ていうか少年漫画的な熱さは好き。男の子の真顔っていいよね。) <audio controls src="https://huggingface.co/him1920212/style-bert-vits2-hamidashi/resolve/main/sample/kan/audio%20(12).wav"></audio> (それでも、彼が私を『好き』と言った事実に、体は卑しく反応してしまう。自分のお嫁にするほどではないお義理の『好き』。それなのに、私はエサを貰えた犬のように、はしたなく尻尾をぶんぶんと振り回すのだ。) あすみ (Asumi): <audio controls src="https://huggingface.co/him1920212/style-bert-vits2-hamidashi/resolve/main/sample/asu/audio%20(3).wav"></audio> (ルーチンワークを毎日毎日こなして、曲作りは明日へ明日へ伸ばして、小さなとうもろこしを1日1回食べて、目が覚めたらまたすぐに生放送を始める芋虫の私。 なんて、悪い存在のたとえに使ってごめんね芋虫さん。芋虫はいずれ育って綺麗な羽根を生やすもんね。私よりも上等な生き物だよ。) <audio controls src="https://huggingface.co/him1920212/style-bert-vits2-hamidashi/resolve/main/sample/asu/audio%20(4).wav"></audio> (でも、わたしにとってとても大切のことで、どうしてもひよりんには話しておきたくで。) <audio controls src="https://huggingface.co/him1920212/style-bert-vits2-hamidashi/resolve/main/sample/asu/audio%20(5).wav"></audio> (自分でも厄介だな、と思うのは元気が出ると人の声が聞きたくなる。 この場合の声というのは他人同士が会話している声ではなく、かつ通信機器を挟まない生の肉声という贅沢なものだ。 誰かが私に向けてかけてくれる声は、この世界に存在しているかの確認のようなもので、自分がちゃんと周りから見えているんだと安心できる。) <audio controls src="https://huggingface.co/him1920212/style-bert-vits2-hamidashi/resolve/main/sample/asu/audio%20(11).wav"></audio> (嬉しい、文字で読むのと違う、生の応援の声ってこんなに嬉しいものなんだ。いけないこのままじゃ泣いちゃう。 ていうか今の時点でちょっと泣いてる。) <audio controls src="https://huggingface.co/him1920212/style-bert-vits2-hamidashi/resolve/main/sample/asu/audio%20(6).wav"></audio> (あたしはひよりんに嫉妬してるんだ。 あたしより長い時間先輩と居られるひよりんに。) 妃愛 (Hiyori): <audio controls
src="https://huggingface.co/him1920212/style-bert-vits2-hamidashi/resolve/main/sample/hiy/audio%20(27).wav"></audio> (だのに私は、自分が必要とされたい、それだけの理由のために、兄から家事を奪う魔法の言葉を口にした。) <audio controls src="https://huggingface.co/him1920212/style-bert-vits2-hamidashi/resolve/main/sample/hiy/audio%20(25).wav"></audio> (そんな私が「良い妹」であるはずもなく、いつか兄が「本物」に出会う日を恐れていた。 それが今日だ。いや、出会いは昨日だった。その時点で……違うもっと前だ、生徒会長をやれとミリさんに説得された段階で、ベニヤ板の檻は壊れていたのかもしれない。) <audio controls src="https://huggingface.co/him1920212/style-bert-vits2-hamidashi/resolve/main/sample/hiy/audio%20(26).wav"></audio> (情けないけれど、それが今の私の精一杯。私はお兄のたった一人の妹でいたい。でもそれが叶わずに「良い妹」が欲しいのであれば、どうしても拗ねた顔はしてしまうけど「酷い妹」も側に置いてあげて。) <audio controls src="https://huggingface.co/him1920212/style-bert-vits2-hamidashi/resolve/main/sample/hiy/audio%20(20).wav"></audio> (ああ、駄目。絶対に駄目。それを言ったら駄目。分かってるのに。愚かな私。) <audio controls src="https://huggingface.co/him1920212/style-bert-vits2-hamidashi/resolve/main/sample/hiy/audio%20(21).wav"></audio> (彼の笑顔に、また胸が痛みだす。 お兄の態度から二人が気兼ねない関係を築けていることがわかるから。 嫉妬心が、またあたしを急き立てる。 急き立てられるままにあたしは立ち上がって、お兄くんの隣に座りなおした。) <audio controls src="https://huggingface.co/him1920212/style-bert-vits2-hamidashi/resolve/main/sample/hiy/audio%20(22).wav"></audio> (お兄の温もりを感じながら自分に言い聞かせる。 そうだ。あたしだって……こんなに近くに居るんだ。居れるんだ。 こんなこと彼女には出来ない。 どれだけ仲がいいと言っても、きっと彼女とはこんなことはしない。) <audio controls src="https://huggingface.co/him1920212/style-bert-vits2-hamidashi/resolve/main/sample/hiy/audio%20(23).wav"></audio> (というかそもそも、この人に浮気なんて大それたことが出来るわけがないんだ。 自分の欲望のために恋人を傷つけるようなこと、この人の道理に合わない。) <audio controls src="https://huggingface.co/him1920212/style-bert-vits2-hamidashi/resolve/main/sample/hiy/audio%20(24).wav"></audio> (自分の中の道理に背いて、彼女を裏切って、この私を選んでくれたこの人に、私を選んでくれたことを……後悔させない。 絶対にそれだけはさせない。私に出来る何もかもで、この人を夢中にさせてみせる。)
vinhthuan/vietnamese-news-summarizer-v4
vinhthuan
2025-06-01T08:29:40Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "base_model:unsloth/Qwen3-1.7B-unsloth-bnb-4bit", "base_model:finetune:unsloth/Qwen3-1.7B-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-01T08:29:30Z
--- base_model: unsloth/Qwen3-1.7B-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** vinhthuan - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen3-1.7B-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
dice-research/lola_v1
dice-research
2025-06-01T08:28:24Z
483
10
transformers
[ "transformers", "safetensors", "lola_v1", "text-generation", "multilingual", "moe", "custom_code", "dataset:uonlp/CulturaX", "arxiv:2409.11272", "license:cc-by-4.0", "autotrain_compatible", "region:us" ]
text-generation
2024-04-02T12:05:45Z
--- library_name: transformers license: cc-by-4.0 datasets: - uonlp/CulturaX tags: - multilingual - moe --- <p align="center"> <img src="lola-logo.png"/> </p> <!-- Provide a quick summary of what the model is/does. --> # LOLA &mdash; An Open-Source Massively Multilingual Large Language Model ## Abstract LOLA is a massively multilingual large language model trained on more than 160 languages using a sparse Mixture-of-Experts Transformer architecture. Our architectural and implementation choices address the challenge of harnessing linguistic diversity while maintaining efficiency and avoiding the common pitfalls of multilinguality. Our analysis of the evaluation results shows competitive performance in natural language generation and understanding tasks. Additionally, we demonstrate how the learned expert-routing mechanism exploits implicit phylogenetic linguistic patterns to potentially alleviate the curse of multilinguality. We provide an in-depth look at the training process, an analysis of the datasets, and a balanced exploration of the model's strengths and limitations. As an open-source model, LOLA promotes reproducibility and serves as a robust foundation for future research. Our findings enable the development of compute-efficient multilingual models with strong, scalable performance across languages. Paper: https://aclanthology.org/2025.coling-main.428 <br> Preprint: https://arxiv.org/abs/2409.11272 ## Model Description - **Developed by:** DICE Research Group (https://dice-research.org/) @ Paderborn University (https://www.uni-paderborn.de/) - **Model type:** GPT-2 style (decoder-only) with alternating sparse Mixture-of-Experts layers - **Number of Experts**: 16 - **Model Size**: 1.3 Billion (active*) / 7.4 Billion (total) - **Language(s) (NLP):** 160+ - **License:** CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/) - **Repository:** https://github.com/dice-group/LOLA <sub>* The number of parameters a model utilizes per token (ref: Fedus et al., 2022; Du et al., 2022). This distinction is crucial for understanding the efficiency and performance of MoE models.</sub> ## How to Get Started with the Model This pre-trained (causal language modeling) model can only be used for text generation and requires further fine-tuning for downstream tasks. ### How to use You can use this model directly with a pipeline for text generation. ```python >>> from transformers import pipeline >>> generator = pipeline('text-generation', model="dice-research/lola_v1", trust_remote_code=True) >>> generator("The quick brown fox", max_length=13) [{'generated_text': 'The quick brown fox jumps over the lazy dog.'}] ``` To use top-k sampling, set `do_sample` to `True`.
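For example, a sampled generation could look like this (a sketch — the `top_k` and `temperature` values are illustrative, not tuned recommendations):

```python
>>> generator("The quick brown fox", max_length=30, do_sample=True, top_k=50, temperature=0.7)
```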
**Note:** The tokenizer used in the model comes from mGPT (https://github.com/ai-forever/mgpt) ## Training Details ### Training Framework - DeepSpeed Megatron (https://github.com/microsoft/Megatron-DeepSpeed) - Architecture type: Transformers (Decoder-only) with Mixture-of-Experts (MoE) - Number of Experts: 16 - Model Size: 1.3 Billion Dense / 7.4 Billion Sparse ### Pretraining Dataset - CulturaX (https://huggingface.co/datasets/uonlp/CulturaX) - Total Tokens: 6.3 Trillion - Total Languages: 167 ### LOLA v1 Training: - Computing cluster: Noctua2 (https://pc2.uni-paderborn.de/hpc-services/available-systems/noctua2) - Number of GPUs: 96x Nvidia A100 (40GB) - Training steps: 296000 - Tokens consumed: 465 Billion - Training time: ~19 days ## Citation If you use our work in your research, please make sure to cite it: ```bibtex @inproceedings{srivastava-etal-2025-lola, author = {Nikit Srivastava and Denis Kuchelev and Tatiana Moteu Ngoli and Kshitij Shetty and Michael Röder and Hamada Zahera and Diego Moussallem and Axel-Cyrille Ngonga Ngomo}, title = {{LOLA} -- An Open-Source Massively Multilingual Large Language Model}, booktitle = {Proceedings of the 31st International Conference on Computational Linguistics}, editor = {Owen Rambow and Leo Wanner and Marianna Apidianaki and Hend Al-Khalifa and Barbara Di Eugenio and Steven Schockaert}, month = jan, year = {2025}, address = {Abu Dhabi, UAE}, publisher = {Association for Computational Linguistics}, pages = {6420--6446}, url = {https://aclanthology.org/2025.coling-main.428/}, note = {arXiv:2409.11272 [cs.CL]}, } ```
BootesVoid/cmbbtdmzg0aaa85uupy259pyc_cmbdcdzl900edoq1h37i3fssb
BootesVoid
2025-06-01T08:26:41Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-01T08:26:40Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: AVA --- # Cmbbtdmzg0Aaa85Uupy259Pyc_Cmbdcdzl900Edoq1H37I3Fssb <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `AVA` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "AVA", "lora_weights": "https://huggingface.co/BootesVoid/cmbbtdmzg0aaa85uupy259pyc_cmbdcdzl900edoq1h37i3fssb/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmbbtdmzg0aaa85uupy259pyc_cmbdcdzl900edoq1h37i3fssb', weight_name='lora.safetensors') image = pipeline('AVA').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmbbtdmzg0aaa85uupy259pyc_cmbdcdzl900edoq1h37i3fssb/discussions) to add images that show off what you’ve made with this LoRA.
mradermacher/finvix1.4-1.5B-GGUF
mradermacher
2025-06-01T08:24:23Z
0
1
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "qwen2", "trl", "sft", "en", "base_model:LaaP-ai/finvix1.4-1.5B", "base_model:quantized:LaaP-ai/finvix1.4-1.5B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-01T08:10:28Z
--- base_model: LaaP-ai/finvix1.4-1.5B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/LaaP-ai/finvix1.4-1.5B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/finvix1.4-1.5B-GGUF/resolve/main/finvix1.4-1.5B.Q2_K.gguf) | Q2_K | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/finvix1.4-1.5B-GGUF/resolve/main/finvix1.4-1.5B.Q3_K_S.gguf) | Q3_K_S | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/finvix1.4-1.5B-GGUF/resolve/main/finvix1.4-1.5B.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/finvix1.4-1.5B-GGUF/resolve/main/finvix1.4-1.5B.Q3_K_L.gguf) | Q3_K_L | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/finvix1.4-1.5B-GGUF/resolve/main/finvix1.4-1.5B.IQ4_XS.gguf) | IQ4_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/finvix1.4-1.5B-GGUF/resolve/main/finvix1.4-1.5B.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/finvix1.4-1.5B-GGUF/resolve/main/finvix1.4-1.5B.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/finvix1.4-1.5B-GGUF/resolve/main/finvix1.4-1.5B.Q5_K_S.gguf) | Q5_K_S | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/finvix1.4-1.5B-GGUF/resolve/main/finvix1.4-1.5B.Q5_K_M.gguf) | Q5_K_M | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/finvix1.4-1.5B-GGUF/resolve/main/finvix1.4-1.5B.Q6_K.gguf) | Q6_K | 1.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/finvix1.4-1.5B-GGUF/resolve/main/finvix1.4-1.5B.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/finvix1.4-1.5B-GGUF/resolve/main/finvix1.4-1.5B.f16.gguf) | f16 | 3.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
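For a quick start beyond the linked READMEs, here is a minimal llama-cpp-python sketch — one possible GGUF runtime among several; the file name matches the Q4_K_M row above, and the local path is a placeholder:

```python
from llama_cpp import Llama

# Load the quant downloaded from the table above.
llm = Llama(model_path="finvix1.4-1.5B.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain the current ratio in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```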
Sweaterdog/Charles
Sweaterdog
2025-06-01T08:24:10Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-31T19:48:18Z
--- license: apache-2.0 --- # Charles Charles is a 3B multimodal reasoning model built for agentic coding, mathematics, and health. The model was trained with PPO techniques based on examples from Open-R1, together with from-scratch dataset generation from the Charles GitHub project and Mindcraft to improve agentic tool usage. The base model was Qwen2.5-VL 3B, trained for 2 epochs on 51,526 examples of pure reasoning data, most of which were coding examples. The model builds on techniques and dataset formatting learned from the Andy-4 series of models as well as Smol-reason2.1. Charles is an acronym and stands for: **"Conversational Helpful Assistant** *with* **Robust Logic** *and* **Extensible Skills"** I will post the Charles framework web app after I release the Charles LLM and show that it works well with the application and can outperform some larger models that aren't trained for reasoning or agentic code use.
mradermacher/QvQ-Step-Tiny-GGUF
mradermacher
2025-06-01T08:22:01Z
161
0
transformers
[ "transformers", "gguf", "QvQ", "Qwen", "Contexr-Explainer", "en", "base_model:prithivMLmods/QvQ-Step-Tiny", "base_model:quantized:prithivMLmods/QvQ-Step-Tiny", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-01-26T07:05:11Z
--- base_model: prithivMLmods/QvQ-Step-Tiny language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - QvQ - Qwen - Contexr-Explainer --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/prithivMLmods/QvQ-Step-Tiny <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/QvQ-Step-Tiny-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/QvQ-Step-Tiny-GGUF/resolve/main/QvQ-Step-Tiny.Q2_K.gguf) | Q2_K | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/QvQ-Step-Tiny-GGUF/resolve/main/QvQ-Step-Tiny.Q3_K_S.gguf) | Q3_K_S | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/QvQ-Step-Tiny-GGUF/resolve/main/QvQ-Step-Tiny.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/QvQ-Step-Tiny-GGUF/resolve/main/QvQ-Step-Tiny.Q3_K_L.gguf) | Q3_K_L | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/QvQ-Step-Tiny-GGUF/resolve/main/QvQ-Step-Tiny.IQ4_XS.gguf) | IQ4_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/QvQ-Step-Tiny-GGUF/resolve/main/QvQ-Step-Tiny.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/QvQ-Step-Tiny-GGUF/resolve/main/QvQ-Step-Tiny.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/QvQ-Step-Tiny-GGUF/resolve/main/QvQ-Step-Tiny.Q5_K_S.gguf) | Q5_K_S | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/QvQ-Step-Tiny-GGUF/resolve/main/QvQ-Step-Tiny.Q5_K_M.gguf) | Q5_K_M | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/QvQ-Step-Tiny-GGUF/resolve/main/QvQ-Step-Tiny.Q6_K.gguf) | Q6_K | 1.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/QvQ-Step-Tiny-GGUF/resolve/main/QvQ-Step-Tiny.mmproj-fp16.gguf) | mmproj-fp16 | 1.4 | multi-modal supplement | | [GGUF](https://huggingface.co/mradermacher/QvQ-Step-Tiny-GGUF/resolve/main/QvQ-Step-Tiny.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/QvQ-Step-Tiny-GGUF/resolve/main/QvQ-Step-Tiny.f16.gguf) | f16 | 3.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
splendor1811/BGE-base-banking-ONE
splendor1811
2025-06-01T08:17:41Z
6
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:13863", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "vi", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:BAAI/bge-m3", "base_model:finetune:BAAI/bge-m3", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-05-31T06:54:57Z
--- language: - vi license: apache-2.0 tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:13863 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss base_model: BAAI/bge-m3 pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 model-index: - name: BGE base banking-domain results: - task: type: information-retrieval name: Information Retrieval dataset: name: dim 1024 type: dim_1024 metrics: - type: cosine_accuracy@1 value: 0.6794289422452953 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.6794289422452953 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.6794289422452953 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.7495133030499675 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.6794289422452953 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.6794289422452953 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.6794289422452953 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.6475665152498379 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.06451611189767917 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.19354833569303753 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.3225805594883958 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.6093606768887758 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.6841458309511136 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.6864373783257635 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.7415417931322348 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 768 type: dim_768 metrics: - type: cosine_accuracy@1 value: 0.6833225178455549 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.6833225178455549 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.6833225178455549 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.7501622323166774 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.6833225178455549 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.6833225178455549 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.6833225178455549 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.6506164828033745 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.06504982962430568 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.19514948887291705 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.32524914812152844 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.6137239056145772 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.6878133441048903 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.6900064892926681 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.744591603986632 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 512 type: dim_512 metrics: - type: cosine_accuracy@1 value: 0.6742375081116159 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.6742375081116159 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.6742375081116159 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.744970798182998 name: Cosine Accuracy@10 - type: 
cosine_precision@1 value: 0.6742375081116159 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.6742375081116159 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.6742375081116159 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.6426346528228424 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.06396801050723666 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.19190403152170996 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.3198400525361833 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.6040684079277432 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.6789816568593566 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.681310837118755 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.7378968921453669 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 256 type: dim_256 metrics: - type: cosine_accuracy@1 value: 0.6683971447112265 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.6683971447112265 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.6683971447112265 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.7345879299156391 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.6683971447112265 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.6683971447112265 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.6683971447112265 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.636924075275795 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.06306557768343617 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.18919673305030846 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.3153278884171808 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.5953344384207997 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.6728354652995195 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.6750162232316688 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.7313926770945957 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 128 type: dim_128 metrics: - type: cosine_accuracy@1 value: 0.6359506813757301 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.6359506813757301 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.6359506813757301 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.7099286177806619 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.6359506813757301 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.6359506813757301 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.6359506813757301 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.6070733290071382 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.060100944865524135 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.1803028345965724 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.30050472432762065 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.5685061276777386 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.6409096722903447 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.6433484750162244 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.7035311914902788 name: Cosine Map@100 --- # BGE base banking-domain This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) on the json dataset. 
It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) <!-- at revision 5617a9f61b028005a4858fdac845db406aefb181 --> - **Maximum Sequence Length:** 8192 tokens - **Output Dimensionality:** 1024 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - json - **Language:** vi - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("splendor1811/BGE-base-banking-ONE") # Run inference sentences = [ 'Các giao dịch sử dụng thẻ tín dụng của tôi ', 'Hướng dẫn xem lịch sử ', 'Thiết bị loa sẽ được bảo hành trong 12 tháng. Nếu có vấn đề về sản phẩm trong quá trình sử dụng, Bạn vui lòng đến Trung Tâm Bảo Hành Phong Vũ gần nhất hoặc liên hệ hotline: 1800 6865 để được hỗ trợ bảo hành.\nThông tin về cửa hàng bảo hành Phong Vũ như sau:\n+ Miền Bắc: Tầng 3, số 62 Trần Đại Nghĩa, Phường Đồng Tâm, Quận Hai Bà Trưng, TP. Hà Nội.\n+ Miền Nam: 132E Cách Mạng Tháng 8, Phường 9, Quận 3, TP. Hồ Chí Minh.\n+ Miền Trung: Tầng 2, 14-16-18 Nguyễn Văn Linh, Phường Nam Dương, Quận Hải Châu, TP. Đà Nẵng.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `dim_1024` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters: ```json { "truncate_dim": 1024 } ``` | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.6794 | | cosine_accuracy@3 | 0.6794 | | cosine_accuracy@5 | 0.6794 | | cosine_accuracy@10 | 0.7495 | | cosine_precision@1 | 0.6794 | | cosine_precision@3 | 0.6794 | | cosine_precision@5 | 0.6794 | | cosine_precision@10 | 0.6476 | | cosine_recall@1 | 0.0645 | | cosine_recall@3 | 0.1935 | | cosine_recall@5 | 0.3226 | | cosine_recall@10 | 0.6094 | | **cosine_ndcg@10** | **0.6841** | | cosine_mrr@10 | 0.6864 | | cosine_map@100 | 0.7415 | #### Information Retrieval * Dataset: `dim_768` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters: ```json { "truncate_dim": 768 } ``` | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.6833 | | cosine_accuracy@3 | 0.6833 | | cosine_accuracy@5 | 0.6833 | | cosine_accuracy@10 | 0.7502 | | cosine_precision@1 | 0.6833 | | cosine_precision@3 | 0.6833 | | cosine_precision@5 | 0.6833 | | cosine_precision@10 | 0.6506 | | cosine_recall@1 | 0.065 | | cosine_recall@3 | 0.1951 | | cosine_recall@5 | 0.3252 | | cosine_recall@10 | 0.6137 | | **cosine_ndcg@10** | **0.6878** | | cosine_mrr@10 | 0.69 | | cosine_map@100 | 0.7446 | #### Information Retrieval * Dataset: `dim_512` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters: ```json { "truncate_dim": 512 } ``` | Metric | Value | |:--------------------|:----------| | cosine_accuracy@1 | 0.6742 | | cosine_accuracy@3 | 0.6742 | | cosine_accuracy@5 | 0.6742 | | cosine_accuracy@10 | 0.745 | | cosine_precision@1 | 0.6742 | | cosine_precision@3 | 0.6742 | | cosine_precision@5 | 0.6742 | | cosine_precision@10 | 0.6426 | | cosine_recall@1 | 0.064 | | cosine_recall@3 | 0.1919 | | cosine_recall@5 | 0.3198 | | cosine_recall@10 | 0.6041 | | **cosine_ndcg@10** | **0.679** | | cosine_mrr@10 | 0.6813 | | cosine_map@100 | 0.7379 | #### Information Retrieval * Dataset: `dim_256` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters: ```json { "truncate_dim": 256 } ``` | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.6684 | | cosine_accuracy@3 | 0.6684 | | cosine_accuracy@5 | 0.6684 | | cosine_accuracy@10 | 0.7346 | | cosine_precision@1 | 0.6684 | | cosine_precision@3 | 0.6684 | | cosine_precision@5 | 0.6684 | | cosine_precision@10 | 0.6369 | | cosine_recall@1 | 0.0631 | | cosine_recall@3 | 0.1892 | | cosine_recall@5 | 0.3153 | | cosine_recall@10 | 0.5953 | | **cosine_ndcg@10** | **0.6728** | | cosine_mrr@10 | 0.675 | | cosine_map@100 | 0.7314 | #### Information Retrieval 
* Dataset: `dim_128` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters: ```json { "truncate_dim": 128 } ``` | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.636 | | cosine_accuracy@3 | 0.636 | | cosine_accuracy@5 | 0.636 | | cosine_accuracy@10 | 0.7099 | | cosine_precision@1 | 0.636 | | cosine_precision@3 | 0.636 | | cosine_precision@5 | 0.636 | | cosine_precision@10 | 0.6071 | | cosine_recall@1 | 0.0601 | | cosine_recall@3 | 0.1803 | | cosine_recall@5 | 0.3005 | | cosine_recall@10 | 0.5685 | | **cosine_ndcg@10** | **0.6409** | | cosine_mrr@10 | 0.6433 | | cosine_map@100 | 0.7035 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### json * Dataset: json * Size: 13,863 training samples * Columns: <code>anchor</code> and <code>positive</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | |:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 3 tokens</li><li>mean: 22.35 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 225.69 tokens</li><li>max: 419 tokens</li></ul> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 1024, 768, 512, 256, 128 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 16 - `gradient_accumulation_steps`: 16 - `learning_rate`: 2e-05 - `num_train_epochs`: 6 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.1 - `bf16`: True - `tf32`: True - `load_best_model_at_end`: True - `optim`: adamw_torch_fused - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 16 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 6 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False 
- `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: True - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | dim_1024_cosine_ndcg@10 | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | |:-------:|:-------:|:-------------:|:-----------------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:| | 0.3687 | 10 | 48.8875 | - | - | - | - | - | | 0.7373 | 20 | 23.8518 | - | - | - | - | - | | 1.0 | 28 | - | 0.6421 | 0.6376 | 0.6334 | 0.6215 | 0.5950 | | 1.0737 | 30 | 16.242 | - | - | - | - | - | | 1.4424 | 40 | 13.0298 | - | - | - | - | - | | 1.8111 | 50 | 12.8472 | - | - | - | - | - | | 2.0 | 56 | - | 0.6764 | 0.6663 | 0.6589 | 0.6487 | 0.6127 | | 2.1475 | 60 | 9.3195 | - | - | - | - | - | | 2.5161 | 70 | 9.0553 | - | - | - | - | - | | 2.8848 | 80 | 9.8082 | - | - | - | - | - | | 3.0 | 84 | - | 0.6801 | 0.6792 | 0.6749 | 0.6679 | 0.6279 | | 3.2212 | 90 | 7.864 | - | - | - | - | - | | 3.5899 | 100 | 7.6955 | - | - | - | - | - | | 3.9585 | 110 | 8.0813 | - | - | - | - | - | | **4.0** | **112** | **-** | **0.6879** | **0.6888** | **0.6779** | **0.6645** | **0.6361** | | 4.2949 | 120 | 6.899 | - | - | - | - 
| - | | 4.6636 | 130 | 7.1247 | - | - | - | - | - | | 5.0 | 140 | 6.2173 | 0.6841 | 0.6859 | 0.6770 | 0.6702 | 0.6410 | | 5.3687 | 150 | 6.741 | - | - | - | - | - | | 5.7373 | 160 | 6.5777 | - | - | - | - | - | | 6.0 | 168 | - | 0.6841 | 0.6878 | 0.6790 | 0.6728 | 0.6409 | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.11.12 - Sentence Transformers: 4.1.0 - Transformers: 4.52.4 - PyTorch: 2.6.0+cu124 - Accelerate: 1.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
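Because training used MatryoshkaLoss over the dimensions [1024, 768, 512, 256, 128], embeddings can be truncated at load time via the standard `truncate_dim` option of sentence-transformers — a sketch using 256, one of the trained dimensions:

```python
from sentence_transformers import SentenceTransformer

# Load with Matryoshka truncation to 256 dimensions.
model = SentenceTransformer("splendor1811/BGE-base-banking-ONE", truncate_dim=256)
emb = model.encode(["Các giao dịch sử dụng thẻ tín dụng của tôi"])
print(emb.shape)  # (1, 256)
```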
mradermacher/2FT-tulu3-SuperGPQA-GGUF
mradermacher
2025-06-01T08:14:23Z
0
0
transformers
[ "transformers", "gguf", "trl", "sft", "en", "base_model:c0ntrolZ/2FT-tulu3-SuperGPQA", "base_model:quantized:c0ntrolZ/2FT-tulu3-SuperGPQA", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-01T08:07:21Z
--- base_model: c0ntrolZ/2FT-tulu3-SuperGPQA language: - en library_name: transformers quantized_by: mradermacher tags: - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/c0ntrolZ/2FT-tulu3-SuperGPQA <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/2FT-tulu3-SuperGPQA-GGUF/resolve/main/2FT-tulu3-SuperGPQA.Q2_K.gguf) | Q2_K | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/2FT-tulu3-SuperGPQA-GGUF/resolve/main/2FT-tulu3-SuperGPQA.Q3_K_S.gguf) | Q3_K_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/2FT-tulu3-SuperGPQA-GGUF/resolve/main/2FT-tulu3-SuperGPQA.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/2FT-tulu3-SuperGPQA-GGUF/resolve/main/2FT-tulu3-SuperGPQA.Q3_K_L.gguf) | Q3_K_L | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/2FT-tulu3-SuperGPQA-GGUF/resolve/main/2FT-tulu3-SuperGPQA.IQ4_XS.gguf) | IQ4_XS | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/2FT-tulu3-SuperGPQA-GGUF/resolve/main/2FT-tulu3-SuperGPQA.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/2FT-tulu3-SuperGPQA-GGUF/resolve/main/2FT-tulu3-SuperGPQA.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/2FT-tulu3-SuperGPQA-GGUF/resolve/main/2FT-tulu3-SuperGPQA.Q5_K_S.gguf) | Q5_K_S | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/2FT-tulu3-SuperGPQA-GGUF/resolve/main/2FT-tulu3-SuperGPQA.Q5_K_M.gguf) | Q5_K_M | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/2FT-tulu3-SuperGPQA-GGUF/resolve/main/2FT-tulu3-SuperGPQA.Q6_K.gguf) | Q6_K | 0.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/2FT-tulu3-SuperGPQA-GGUF/resolve/main/2FT-tulu3-SuperGPQA.Q8_0.gguf) | Q8_0 | 0.7 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/2FT-tulu3-SuperGPQA-GGUF/resolve/main/2FT-tulu3-SuperGPQA.f16.gguf) | f16 | 1.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
PawanCoder786/mentorai
PawanCoder786
2025-06-01T08:10:14Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-01T08:10:13Z
--- license: apache-2.0 ---
ahmedlh/whisper-tiny-fr
ahmedlh
2025-06-01T08:09:24Z
1
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "fr", "dataset:mozilla-foundation/common_voice_1_0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-05-23T09:04:19Z
--- library_name: transformers language: - fr base_model: openai/whisper-tiny_2_ tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_1_0 metrics: - wer model-index: - name: Whisper tiny fr _2_ Naji results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 1 type: mozilla-foundation/common_voice_1_0 config: fr split: test args: fr metrics: - name: Wer type: wer value: 32.53961922995107 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper tiny fr _2_ Naji This model is a fine-tuned version of [openai/whisper-tiny_2_](https://huggingface.co/openai/whisper-tiny_2_) on the Common Voice 1 dataset. It achieves the following results on the evaluation set: - Loss: 0.5924 - Wer Ortho: 41.9920 - Wer: 32.5396 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:------:|:----:|:---------------:|:---------:|:-------:| | 0.6189 | 0.2877 | 500 | 0.5924 | 41.9920 | 32.5396 | ### Framework versions - Transformers 4.52.2 - Pytorch 2.6.0+cu124 - Datasets 2.14.6 - Tokenizers 0.21.1
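For inference, a minimal sketch with the transformers ASR pipeline (`audio.wav` is a placeholder path to a French speech file):

```python
from transformers import pipeline

# Transcribe French speech with the fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="ahmedlh/whisper-tiny-fr")
print(asr("audio.wav")["text"])
```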
deepBD/chumma
deepBD
2025-06-01T08:04:21Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-10T04:42:29Z
--- license: apache-2.0 ---