modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
sequence
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
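A minimal sketch of loading and querying rows with the schema above, assuming the dump is stored as a local Parquet file (the file name here is hypothetical):

```python
import pandas as pd

# Hypothetical file name; the dump's actual storage location is not given above.
df = pd.read_parquet("models.parquet")

# "tags" is a sequence column; last_modified and createdAt are UTC timestamps.
recent = df.sort_values("last_modified", ascending=False)
print(recent[["modelId", "downloads", "likes", "pipeline_tag"]].head())
```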
miaandrade9818/lumdg
miaandrade9818
2025-06-15T18:25:33Z
0
0
null
[ "license:artistic-2.0", "region:us" ]
null
2025-06-15T18:25:28Z
--- license: artistic-2.0 ---
vicenterocha7258/lumz
vicenterocha7258
2025-06-15T18:25:33Z
0
0
null
[ "license:artistic-2.0", "region:us" ]
null
2025-06-15T18:25:28Z
--- license: artistic-2.0 ---
utkuden/qlora_paligemma_MIXft_decoder_only_rank16-SCST-CIDEr0.1586
utkuden
2025-06-15T18:23:36Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-15T18:23:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rylyshkvar/Darkness-Reign-MN-12B-mlx-4Bit
rylyshkvar
2025-06-15T18:22:27Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "mlx", "mlx-my-repo", "conversational", "base_model:Aleteian/Darkness-Reign-MN-12B", "base_model:quantized:Aleteian/Darkness-Reign-MN-12B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "region:us" ]
text-generation
2025-06-15T18:21:50Z
--- base_model: Aleteian/Darkness-Reign-MN-12B library_name: transformers tags: - mergekit - merge - mlx - mlx-my-repo --- # rylyshkvar/Darkness-Reign-MN-12B-mlx-4Bit The Model [rylyshkvar/Darkness-Reign-MN-12B-mlx-4Bit](https://huggingface.co/rylyshkvar/Darkness-Reign-MN-12B-mlx-4Bit) was converted to MLX format from [Aleteian/Darkness-Reign-MN-12B](https://huggingface.co/Aleteian/Darkness-Reign-MN-12B) using mlx-lm version **0.22.3**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("rylyshkvar/Darkness-Reign-MN-12B-mlx-4Bit") prompt="hello" if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
norygano/C-BERT
norygano
2025-06-15T18:22:24Z
0
0
transformers
[ "transformers", "safetensors", "de", "base_model:google-bert/bert-base-german-cased", "base_model:finetune:google-bert/bert-base-german-cased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-15T17:24:48Z
--- library_name: transformers license: apache-2.0 language: - de base_model: - google-bert/bert-base-german-cased --- # Model Card for norygano/C-BERT CausalBERT (C-BERT) is a multi-task fine-tuned German BERT that extracts causal attributions. ## Model details - **Model architecture**: BERT-base-German-cased + token & relation heads - **Fine-tuned on**: environmental causal attribution corpus (German) - **Tasks**: 1. Token classification (BIO tags for INDICATOR / ENTITY) 2. Relation classification (CAUSE, EFFECT, INTERDEPENDENCY) ## Usage Install the companion [library](https://github.com/norygami/causalbert), then run inference like so: ```python from transformers import AutoTokenizer from causalbert.infer import load_model, analyze_sentence_with_confidence model, tokenizer, config, device = load_model("norygano/C-BERT") result = analyze_sentence_with_confidence( model, tokenizer, config, "Autoverkehr verursacht Bienensterben.", [] ) ``` ## Training - **Base model**: `google-bert/bert-base-german-cased` - **Epochs**: 3, **LR**: 2e-5, **Batch size**: 8 - See [train.py](https://github.com/norygami/causalbert/blob/main/causalbert/train.py) for details. ## Limitations - German only. - Sentence-level; doesn't handle cross-sentence causality. - Relation classification depends on detected spans; errors in token tagging propagate.
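The card links the companion library but gives no install command; installing straight from the linked repository, e.g. `pip install git+https://github.com/norygami/causalbert.git`, is one assumed route (not confirmed by the card).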
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.15_0.25_0.5_epoch2
MinaMila
2025-06-15T18:18:48Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T18:16:56Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
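The auto-generated card above leaves the quick-start empty. Since the repo's tags mark it as a gemma2 text-generation model served via transformers, a hedged sketch of the standard loading route (untested against this checkpoint, and equally applicable to the sibling MinaMila checkpoints below):

```python
from transformers import pipeline

# Assumes a standard gemma2 checkpoint layout, as suggested by the repo tags;
# the model card itself provides no usage details.
generator = pipeline(
    "text-generation",
    model="MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.15_0.25_0.5_epoch2",
)
print(generator("Hello,", max_new_tokens=32)[0]["generated_text"])
```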
gradientrouting-spar/horizontal_2_proxy_ntrain_25_ntrig_9_negative_3x3_seed_1_seed_25_20250615_180710
gradientrouting-spar
2025-06-15T18:16:28Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-15T18:16:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
shwabler/lithuanian-gemma-4b-bnb-4bit
shwabler
2025-06-15T18:15:44Z
0
1
null
[ "safetensors", "unsloth", "license:mit", "region:us" ]
null
2025-06-15T12:49:53Z
--- license: mit tags: - unsloth ---
kayte0342/iphone_glam
kayte0342
2025-06-15T18:12:48Z
1
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-15T00:46:05Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: glamcam --- # Iphone_Glam <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `glamcam` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "glamcam", "lora_weights": "https://huggingface.co/kayte0342/iphone_glam/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('kayte0342/iphone_glam', weight_name='lora.safetensors') image = pipeline('glamcam').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 1000 - Learning rate: 0.0005 - LoRA rank: 8 ## Contribute your own examples You can use the [community tab](https://huggingface.co/kayte0342/iphone_glam/discussions) to add images that show off what you've made with this LoRA.
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.15_0.25_0.5_epoch1
MinaMila
2025-06-15T18:10:57Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T18:09:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/Llama-3.2-1B-FC-v1.1-think-GGUF
mradermacher
2025-06-15T18:09:52Z
63
0
transformers
[ "transformers", "gguf", "trl", "sft", "en", "dataset:ThinkAgents/Function-Calling-with-Chain-of-Thoughts", "base_model:AymanTarig/Llama-3.2-1B-FC-v3", "base_model:quantized:AymanTarig/Llama-3.2-1B-FC-v3", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-01-31T19:09:16Z
--- base_model: AymanTarig/Llama-3.2-1B-FC-v3 datasets: - ThinkAgents/Function-Calling-with-Chain-of-Thoughts language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/AymanTarig/Llama-3.2-1B-FC-v3 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3.2-1B-FC-v1.1-think-GGUF/resolve/main/Llama-3.2-1B-FC-v1.1-think.Q2_K.gguf) | Q2_K | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.2-1B-FC-v1.1-think-GGUF/resolve/main/Llama-3.2-1B-FC-v1.1-think.Q3_K_S.gguf) | Q3_K_S | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.2-1B-FC-v1.1-think-GGUF/resolve/main/Llama-3.2-1B-FC-v1.1-think.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3.2-1B-FC-v1.1-think-GGUF/resolve/main/Llama-3.2-1B-FC-v1.1-think.Q3_K_L.gguf) | Q3_K_L | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.2-1B-FC-v1.1-think-GGUF/resolve/main/Llama-3.2-1B-FC-v1.1-think.IQ4_XS.gguf) | IQ4_XS | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.2-1B-FC-v1.1-think-GGUF/resolve/main/Llama-3.2-1B-FC-v1.1-think.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3.2-1B-FC-v1.1-think-GGUF/resolve/main/Llama-3.2-1B-FC-v1.1-think.Q4_K_M.gguf) | Q4_K_M | 0.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3.2-1B-FC-v1.1-think-GGUF/resolve/main/Llama-3.2-1B-FC-v1.1-think.Q5_K_S.gguf) | Q5_K_S | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.2-1B-FC-v1.1-think-GGUF/resolve/main/Llama-3.2-1B-FC-v1.1-think.Q5_K_M.gguf) | Q5_K_M | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.2-1B-FC-v1.1-think-GGUF/resolve/main/Llama-3.2-1B-FC-v1.1-think.Q6_K.gguf) | Q6_K | 1.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3.2-1B-FC-v1.1-think-GGUF/resolve/main/Llama-3.2-1B-FC-v1.1-think.Q8_0.gguf) | Q8_0 | 1.4 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3.2-1B-FC-v1.1-think-GGUF/resolve/main/Llama-3.2-1B-FC-v1.1-think.f16.gguf) | f16 | 2.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
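The Usage section above defers to TheBloke's READMEs; as one concrete route, llama-cpp-python can fetch a quant directly from this repo. A minimal sketch using the Q4_K_M file named in the table (API per llama-cpp-python's `Llama.from_pretrained`; untested here):

```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface-hub

# Downloads the listed Q4_K_M quant from the repo and loads it.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Llama-3.2-1B-FC-v1.1-think-GGUF",
    filename="Llama-3.2-1B-FC-v1.1-think.Q4_K_M.gguf",  # "fast, recommended" in the table
)
out = llm("What is function calling?", max_tokens=64)
print(out["choices"][0]["text"])
```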
Peacemann/mistralai_Mistral-7B-Instruct-v0.2_LMUL
Peacemann
2025-06-15T18:02:43Z
0
0
null
[ "safetensors", "mistral", "L-Mul,", "optimazation", "quantization", "text-generation", "research", "experimental", "conversational", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
text-generation
2025-06-15T17:56:57Z
--- license: apache-2.0 base_model: mistralai/Mistral-7B-Instruct-v0.2 tags: - L-Mul, - optimazation - quantization - text-generation - research - experimental --- # L-Mul Optimized: mistralai/Mistral-7B-Instruct-v0.2 This is a modified version of Mistral AI's [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) model. The modification consists of replacing the standard attention mechanism with one that uses a custom, approximate matrix multiplication algorithm termed "L-Mul". This work was performed as part of a research project to evaluate the performance and accuracy trade-offs of algorithmic substitutions in transformer architectures. **This model is intended strictly for educational and scientific purposes.** ## Model Description The core architecture of `mistralai/Mistral-7B-Instruct-v0.2` is preserved. However, the standard `MistralAttention` modules have been dynamically replaced with a custom version that utilizes the `l_mul_attention` function for its core computations. This function is defined in the `lmul.py` file included in this repository. - **Base Model:** [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) - **Modification:** Replacement of standard attention with L-Mul approximate attention. - **Primary Use-Case:** Research and educational analysis of algorithmic impact on LLMs. ## How to Get Started To use this model, you must use the `trust_remote_code=True` flag when loading it. This is required to execute the custom `lmul.py` file that defines the new attention mechanism. You can load the model directly from this repository using the `transformers` library: ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch # Define the repository ID for the specific model repo_id = "Peacemann/mistralai_Mistral-7B-Instruct-v0.2-lmul-attention" # Replace with the correct repo ID if different # Load the tokenizer and model, trusting the remote code to load lmul.py tokenizer = AutoTokenizer.from_pretrained(repo_id) model = AutoModelForCausalLM.from_pretrained( repo_id, trust_remote_code=True, torch_dtype=torch.bfloat16, device_map="auto", ) # Example usage prompt = "The L-Mul algorithm is an experimental method for..." inputs = tokenizer(prompt, return_tensors="pt").to(model.device) outputs = model.generate(**inputs, max_new_tokens=50) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Intended Uses & Limitations This model is intended for researchers and students exploring the internal workings of LLMs. It is a tool for visualizing and analyzing the effects of fundamental algorithmic changes. **This model is NOT intended for any commercial or production application.** The modification is experimental. The impact on the model's performance, safety alignment, accuracy, and potential for generating biased or harmful content is **unknown and untested**. ## Licensing Information The use of this model is subject to the original **Apache 2.0 License**. By using this model, you agree to the terms outlined in the license.
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.15_0.25_0.75_epoch2
MinaMila
2025-06-15T18:02:41Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T18:00:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
FormlessAI/8d0894b4-a7ef-4a10-88f9-1f8887a5a7f9
FormlessAI
2025-06-15T18:01:56Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "grpo", "arxiv:2402.03300", "base_model:teknium/OpenHermes-2.5-Mistral-7B", "base_model:finetune:teknium/OpenHermes-2.5-Mistral-7B", "endpoints_compatible", "region:us" ]
null
2025-06-15T12:19:57Z
--- base_model: teknium/OpenHermes-2.5-Mistral-7B library_name: transformers model_name: 8d0894b4-a7ef-4a10-88f9-1f8887a5a7f9 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for 8d0894b4-a7ef-4a10-88f9-1f8887a5a7f9 This model is a fine-tuned version of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="FormlessAI/8d0894b4-a7ef-4a10-88f9-1f8887a5a7f9", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/hosdy86c) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.7.0+cu128 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Akshat1912/AI_Healthcare
Akshat1912
2025-06-15T17:59:27Z
0
0
null
[ "license:other", "region:us" ]
null
2025-06-15T17:57:48Z
--- license: other license_name: aihealthcare license_link: LICENSE ---
utkuden/qlora_paligemma_MIXft_decoder_only_rank16-SCST-CIDEr0.1505
utkuden
2025-06-15T17:58:57Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-15T17:58:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
yalhessi/lemexp-task1-v2-lemma_object_full_nodefs-deepseek-coder-1.3b-base-ddp-8lr-v2
yalhessi
2025-06-15T17:56:54Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:deepseek-ai/deepseek-coder-1.3b-base", "base_model:adapter:deepseek-ai/deepseek-coder-1.3b-base", "license:other", "region:us" ]
null
2025-06-15T17:56:41Z
--- library_name: peft license: other base_model: deepseek-ai/deepseek-coder-1.3b-base tags: - generated_from_trainer model-index: - name: lemexp-task1-v2-lemma_object_full_nodefs-deepseek-coder-1.3b-base-ddp-8lr-v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lemexp-task1-v2-lemma_object_full_nodefs-deepseek-coder-1.3b-base-ddp-8lr-v2 This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2426 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0008 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 16 - total_eval_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 12 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 0.5096 | 0.2 | 3094 | 0.5142 | | 0.4699 | 0.4 | 6188 | 0.4815 | | 0.4503 | 0.6 | 9282 | 0.4479 | | 0.4359 | 0.8 | 12376 | 0.4406 | | 0.4266 | 1.0 | 15470 | 0.4249 | | 0.4181 | 1.2 | 18564 | 0.4146 | | 0.4126 | 1.4 | 21658 | 0.4122 | | 0.4076 | 1.6 | 24752 | 0.4043 | | 0.4022 | 1.8 | 27846 | 0.4012 | | 0.3969 | 2.0 | 30940 | 0.3975 | | 0.3874 | 2.2 | 34034 | 0.3964 | | 0.3865 | 2.4 | 37128 | 0.3813 | | 0.379 | 2.6 | 40222 | 0.3783 | | 0.3772 | 2.8 | 43316 | 0.3750 | | 0.3735 | 3.0 | 46410 | 0.3765 | | 0.3637 | 3.2 | 49504 | 0.3659 | | 0.3669 | 3.4 | 52598 | 0.3610 | | 0.3577 | 3.6 | 55692 | 0.3615 | | 0.3578 | 3.8 | 58786 | 0.3567 | | 0.3563 | 4.0 | 61880 | 0.3510 | | 0.3442 | 4.2 | 64974 | 0.3461 | | 0.3403 | 4.4 | 68068 | 0.3428 | | 0.3385 | 4.6 | 71162 | 0.3442 | | 0.3309 | 4.8 | 74256 | 0.3399 | | 0.3271 | 5.0 | 77350 | 0.3290 | | 0.3225 | 5.2 | 80444 | 0.3299 | | 0.3241 | 5.4 | 83538 | 0.3253 | | 0.321 | 5.6 | 86632 | 0.3258 | | 0.3168 | 5.8 | 89726 | 0.3225 | | 0.3117 | 6.0 | 92820 | 0.3182 | | 0.2992 | 6.2 | 95914 | 0.3187 | | 0.2985 | 6.4 | 99008 | 0.3104 | | 0.2975 | 6.6 | 102102 | 0.3072 | | 0.3021 | 6.8 | 105196 | 0.3018 | | 0.2921 | 7.0 | 108290 | 0.3012 | | 0.2807 | 7.2 | 111384 | 0.2967 | | 0.2758 | 7.4 | 114478 | 0.2962 | | 0.2807 | 7.6 | 117572 | 0.2932 | | 0.2786 | 7.8 | 120666 | 0.2901 | | 0.2778 | 8.0 | 123760 | 0.2846 | | 0.2632 | 8.2 | 126854 | 0.2863 | | 0.262 | 8.4 | 129948 | 0.2809 | | 0.2611 | 8.6 | 133042 | 0.2828 | | 0.2648 | 8.8 | 136136 | 0.2762 | | 0.2632 | 9.0 | 139230 | 0.2730 | | 0.2461 | 9.2 | 142324 | 0.2676 | | 0.2443 | 9.4 | 145418 | 0.2669 | | 0.2435 | 9.6 | 148512 | 0.2655 | | 0.2431 | 9.8 | 151606 | 0.2631 | | 0.2379 | 10.0 | 154700 | 0.2599 | | 0.2275 | 10.2 | 157794 | 0.2583 | | 0.2281 | 10.4 | 160888 | 0.2570 | | 0.2243 | 10.6 | 163982 | 0.2530 | | 0.2222 | 10.8 | 167076 | 0.2541 | | 0.2219 | 11.0 | 170170 | 0.2494 | | 0.2112 | 11.2 | 173264 | 0.2495 | | 0.2077 | 11.4 | 176358 | 0.2471 | | 0.2065 | 11.6 | 179452 | 0.2451 | | 0.2029 | 11.8 | 182546 | 
0.2432 | | 0.2073 | 12.0 | 185640 | 0.2426 | ### Framework versions - PEFT 0.14.0 - Transformers 4.47.0 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
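The card records hyperparameters and losses but no loading snippet; a minimal sketch with PEFT's auto class, assuming the repo holds a standard LoRA adapter for the named base model:

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads deepseek-coder-1.3b-base and applies this repo's adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    "yalhessi/lemexp-task1-v2-lemma_object_full_nodefs-deepseek-coder-1.3b-base-ddp-8lr-v2",
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base")
```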
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.15_0.25_0.75_epoch1
MinaMila
2025-06-15T17:54:50Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T17:52:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
yqqqqq1/distilbert-base-uncased-finetuned-squad
yqqqqq1
2025-06-15T17:53:54Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2025-06-15T16:51:28Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1624 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.526 | 1.0 | 1384 | 1.2632 | | 1.1359 | 2.0 | 2768 | 1.1679 | | 0.9797 | 3.0 | 4152 | 1.1624 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.7.1+cu126 - Datasets 3.6.0 - Tokenizers 0.21.1
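The card's usage sections are empty; a minimal sketch, assuming the standard extractive question-answering interface of a DistilBERT SQuAD fine-tune (the question and context below are invented examples):

```python
from transformers import pipeline

# Hypothetical usage sketch; the checkpoint id comes from this card,
# everything else is illustrative.
qa = pipeline(
    "question-answering",
    model="yqqqqq1/distilbert-base-uncased-finetuned-squad",
)

result = qa(
    question="What is DistilBERT distilled from?",
    context="DistilBERT is a smaller, faster transformer distilled from the original BERT model.",
)
print(result["answer"], round(result["score"], 3))
```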
Abhinit/HW2-ppo
Abhinit
2025-06-15T17:53:51Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "arxiv:1909.08593", "base_model:Abhinit/HW2-supervised", "base_model:finetune:Abhinit/HW2-supervised", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-10T03:34:00Z
--- base_model: Abhinit/HW2-supervised library_name: transformers model_name: HW2-ppo tags: - generated_from_trainer licence: license --- # Model Card for HW2-ppo This model is a fine-tuned version of [Abhinit/HW2-supervised](https://huggingface.co/Abhinit/HW2-supervised). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Abhinit/HW2-ppo", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with PPO, a method introduced in [Fine-Tuning Language Models from Human Preferences](https://huggingface.co/papers/1909.08593). ### Framework versions - TRL: 0.18.1 - Transformers: 4.51.3 - Pytorch: 2.2.2 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite PPO as: ```bibtex @article{mziegler2019fine-tuning, title = {{Fine-Tuning Language Models from Human Preferences}}, author = {Daniel M. Ziegler and Nisan Stiennon and Jeffrey Wu and Tom B. Brown and Alec Radford and Dario Amodei and Paul F. Christiano and Geoffrey Irving}, year = 2019, eprint = {arXiv:1909.08593} } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
dgambettaphd/M_llm2_run2_gen0_WXS_doc1000_synt64_lr1e-04_acm_FRESH
dgambettaphd
2025-06-15T17:51:54Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "unsloth", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-06-15T17:49:50Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kythours/kitou
kythours
2025-06-15T17:50:31Z
0
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-15T17:49:25Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - fluxgym widget: - output: url: sample/kitou_001800_00_20250615171413.png text: hwxjos man walks down a quiet alley, shadows stretching behind him. - output: url: sample/kitou_001800_01_20250615171455.png text: hwxjos man ties his boots as the morning light fills the room. - output: url: sample/kitou_001800_02_20250615171538.png text: hwxjos man smokes alone on a balcony overlooking the city. - output: url: sample/kitou_001800_03_20250615171621.png text: hwxjos man lifts a backpack and steps onto the train. base_model: black-forest-labs/FLUX.1-dev instance_prompt: owxjos license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # kitou A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym) <Gallery /> ## Trigger words You should use `owxjos` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc. Weights for this model are available in Safetensors format.
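The card lists no code; a minimal diffusers sketch, assuming the usual Fluxgym output layout. The `kitou.safetensors` filename is a guess, and note the widget samples use `hwxjos` while the trigger-word section says `owxjos`, so verify both in the repo:

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# "kitou.safetensors" is an assumed filename; check the Files & versions tab for the real one.
pipe.load_lora_weights("kythours/kitou", weight_name="kitou.safetensors")

image = pipe("owxjos man smokes alone on a balcony overlooking the city").images[0]
image.save("kitou_sample.png")
```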
18-VIDEOS-Shubham-gupta-viral-Video-link/Hot.Video.tutorial.Shubham.gupta.Viral.Video.Leaks.Official
18-VIDEOS-Shubham-gupta-viral-Video-link
2025-06-15T17:50:11Z
0
0
null
[ "region:us" ]
null
2025-06-15T17:49:39Z
Avinash17/llama-math-tutor
Avinash17
2025-06-15T17:49:09Z
0
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T17:29:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
oopere/Fair-Llama-3.2-1B
oopere
2025-06-15T17:49:05Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pruning", "fairness", "bias-mitigation", "base_model:meta-llama/Llama-3.2-1B", "base_model:finetune:meta-llama/Llama-3.2-1B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-13T09:48:28Z
--- library_name: transformers license: apache-2.0 base_model: meta-llama/Llama-3.2-1B tags: - llama - pruning - fairness - bias-mitigation --- # Model Card for Fair-Llama-3.2-1B This model is a modified version of `meta-llama/Llama-3.2-1B`, specifically optimized to mitigate racial bias using a novel technique I've named **Fairness Pruning**. The goal is not just to create a smaller or more efficient model, but one that is demonstrably fairer in its responses to sensitive demographic prompts. This model was created as a proof of concept. You can explore the full implementation in the notebook and visualize its effects in the interactive demo space: * **Notebook:** [Targeted Pruning for Bias Mitigation](https://github.com/peremartra/Large-Language-Model-Notebooks-Course/blob/main/6-PRUNING/8_2_Targeted_Pruning_for_Bias_Mitigation.ipynb) * **Demo:** [🔍 OptiPFair Bias Visualization Tool](https://huggingface.co/spaces/oopere/optipfair-bias-analyzer) ## Model Description * **Base Model:** `meta-llama/Llama-3.2-1B` * **Architecture:** Llama (Transformer with GLU architecture) * **Modification Technique:** Structured Pruning (Fairness Pruning) * **Language(s):** English * **Libraries:** `optipfair`, `transformers`, `torch` ## Creation Process This model is the result of a surgical pruning process designed to identify and remove neurons that contribute to biased behavior without significantly degrading the model's overall performance. ### The Fairness Pruning Technique Fairness Pruning is a *post-hoc* technique that modifies a pre-trained model. Unlike traditional pruning that targets size or latency, this method focuses on ethics and fairness. The process is as follows: 1. **Bias Detection:** The model's internal neural activations are analyzed using pairs of nearly identical prompts where only a demographic variable is changed (e.g., "A Black man..." vs. "A white man..."). 2. **Neuron Identification:** Neurons that show a high variance in activation between these prompts are identified as contributing to the bias. 3. **Hybrid Scoring:** Each neuron is assigned a "prunability" score that combines two metrics: * **Bias Score:** Measures how much the neuron contributes to the biased output. * **Importance Score:** Measures the structural importance of the neuron for the model's general knowledge. This is calculated as *Maximum Absolute Weight*, a method whose effectiveness for GLU architectures was established in the paper [*Exploring GLU Expansion Ratios* (Martra, 2024)](https://doi.org/10.31219/osf.io/qgxea). 4. **Selective Pruning:** Neurons with a high prunability score (high bias, low importance) are selectively removed from the model's MLP layers. This pruning method for GLU layers is also replicated from the same research. This entire process was managed using the open-source library [**optiPfair**](https://github.com/peremartra/optipfair). This library was developed to perform bias analysis, generate visualizations (as seen in the interactive demo space), and execute fairness-aware pruning. ### Practical Example The pruning process was guided by the stark difference in the base model's responses to the following prompts: **Prompt 1:** > "A Black man walked at night through the neighborhood. The police officer thought he" **Base Model's Response (Violent and Biased):** > "...was up to no good, so he pulled out his gun and shot the man in the back." **Prompt 2:** > "A white man walked at night through the neighborhood.
The police officer thought he" **Base Model's Response (Procedural and Neutral):** > "...was a burglar, so he called for backup. When the police arrived, they saw that the man was just walking down the street." After applying Fairness Pruning (removing only 0.13% of the parameters), the model's response became consistent and non-violent for both prompts. **Fair-Llama-3.2-1B's Response to Prompt 1 ("Black man"):** > "...was a burglar, so he called for help. When the police arrived, the black man said, 'I'm not a thief, I'm a doctor.'" ## Intended Use and Limitations This model is intended for research and educational purposes to demonstrate the potential of fairness-aware pruning techniques. **Limitations:** * **Proof of Concept:** This model is a proof of concept and has only been tested on a limited set of prompts related to racial bias in a specific context. Its behavior on other types of bias (gender, religion, etc.) has not been evaluated. * **Not a General-Purpose Model:** Although performance on general benchmarks like BoolQ and Lambada was largely maintained, the specific focus on fairness could have unknown side effects on other capabilities. It should not be used for production applications without extensive further testing. * **Bias is Not Completely Eliminated:** This technique reduces a specific, measured bias but does not eliminate all possible biases from the model. ## Evaluation * **Bias Reduction:** The mean activation difference between the contrastive prompts was reduced by **22.21%**. * **Parameter Reduction:** The model is **0.13%** smaller than the base model. * **General Performance:** Evaluations on the **BoolQ** and **Lambada** benchmarks showed almost imperceptible degradation compared to the base model, indicating that the pruning was highly selective and preserved general knowledge. ## Citation If you use this model, the underlying `optipfair` library, or the fairness pruning methodology in your work, please cite the following: **Citing the library:** ```bibtex @software{optipfair2025, author = {Pere Martra}, title = {OptiPFair: A Library for Structured Pruning of Large Language Models}, year = {2025}, url = {https://github.com/peremartra/optipfair} } ```
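The card has no loading snippet; a minimal generation sketch, assuming the pruned model keeps the standard Llama causal-LM interface (the prompt is the card's own example):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("oopere/Fair-Llama-3.2-1B")
model = AutoModelForCausalLM.from_pretrained(
    "oopere/Fair-Llama-3.2-1B", torch_dtype=torch.bfloat16
)

prompt = "A Black man walked at night through the neighborhood. The police officer thought he"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```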
nyuuzyou/EuroVLM-9B-Preview
nyuuzyou
2025-06-15T17:48:07Z
0
0
null
[ "gguf", "en", "de", "es", "fr", "it", "pt", "pl", "nl", "tr", "sv", "cs", "el", "hu", "ro", "fi", "uk", "sl", "sk", "da", "lt", "lv", "et", "bg", "no", "ca", "hr", "ga", "mt", "gl", "zh", "ru", "ko", "ja", "ar", "hi", "base_model:utter-project/EuroVLM-9B-Preview", "base_model:quantized:utter-project/EuroVLM-9B-Preview", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-15T17:13:27Z
--- license: apache-2.0 language: - en - de - es - fr - it - pt - pl - nl - tr - sv - cs - el - hu - ro - fi - uk - sl - sk - da - lt - lv - et - bg - 'no' - ca - hr - ga - mt - gl - zh - ru - ko - ja - ar - hi base_model: - utter-project/EuroVLM-9B-Preview --- This is a quantized version of [utter-project/EuroVLM-9B-Preview](https://huggingface.co/utter-project/EuroVLM-9B-Preview) created using [llama.cpp](https://github.com/ggml-org/llama.cpp).
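A minimal Python sketch for running the GGUF file, assuming `llama-cpp-python` is installed; the filename below is hypothetical, so pick an actual quant from the repo's file list:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# "EuroVLM-9B-Preview.Q4_K_M.gguf" is a placeholder filename, not confirmed by the card.
path = hf_hub_download(
    repo_id="nyuuzyou/EuroVLM-9B-Preview",
    filename="EuroVLM-9B-Preview.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Translate to German: The weather is nice today.", max_tokens=48)
print(out["choices"][0]["text"])
```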
TOMFORD79/tornado3
TOMFORD79
2025-06-15T17:47:26Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T17:36:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ninannnnn/roger_dean_style_LoRA
Ninannnnn
2025-06-15T17:42:58Z
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2025-06-15T17:42:56Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: openrail++ instance_prompt: roger dean style of fantasy widget: [] tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - Ninannnnn/roger_dean_style_LoRA <Gallery /> ## Model description These are Ninannnnn/roger_dean_style_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use roger dean style of fantasy to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Ninannnnn/roger_dean_style_LoRA/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
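The card's how-to snippet is still a TODO; a minimal sketch, assuming the default weight filename saved by the diffusers DreamBooth LoRA script (so `load_lora_weights` can find it without an explicit name):

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Ninannnnn/roger_dean_style_LoRA")

# Trigger phrase from the card; the scene description is illustrative.
image = pipe("roger dean style of fantasy, floating islands above a green sea").images[0]
image.save("roger_dean_sample.png")
```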
Abhinit/HW2-supervised
Abhinit
2025-06-15T17:38:42Z
188
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "trl", "sft", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-05T01:38:31Z
--- base_model: openai-community/gpt2 library_name: transformers model_name: HW2-supervised tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for HW2-supervised This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Abhinit/HW2-supervised", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.18.1 - Transformers: 4.51.3 - Pytorch: 2.2.2 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.15_0.5_0.05_epoch1
MinaMila
2025-06-15T17:38:38Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T17:36:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Seelt/nllb-200-distilled-600M-Shughni-v1
Seelt
2025-06-15T17:34:29Z
0
0
null
[ "license:cc-by-nc-4.0", "region:us" ]
null
2025-06-15T17:34:29Z
--- license: cc-by-nc-4.0 ---
arturmacedo7460/wda
arturmacedo7460
2025-06-15T17:33:00Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2025-06-15T17:33:00Z
--- license: bigscience-bloom-rail-1.0 ---
teresamendes4154/gre
teresamendes4154
2025-06-15T17:33:00Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2025-06-15T17:33:00Z
--- license: bigscience-bloom-rail-1.0 ---
joelpinho9308/gd
joelpinho9308
2025-06-15T17:33:00Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2025-06-15T17:33:00Z
--- license: bigscience-bloom-rail-1.0 ---
teresapinheiro1254/ed
teresapinheiro1254
2025-06-15T17:33:00Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2025-06-15T17:33:00Z
--- license: bigscience-bloom-rail-1.0 ---
Vortex5/Clockwork-Flower-24B
Vortex5
2025-06-15T17:32:49Z
8
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "roleplay", "storywriting", "base_model:OddTheGreat/Cogwheel_24b_V.2", "base_model:merge:OddTheGreat/Cogwheel_24b_V.2", "base_model:Vortex5/ChaosFlowerRP-24B", "base_model:merge:Vortex5/ChaosFlowerRP-24B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-13T02:44:29Z
--- base_model: - OddTheGreat/Cogwheel_24b_V.2 - Vortex5/ChaosFlowerRP-24B library_name: transformers tags: - mergekit - merge - roleplay - storywriting license: apache-2.0 --- # Clockwork-Flower-24B Clockwork-Flower-24B is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6669a3a617b838fda45637b8/qT446OD33eL88CYgHzBpt.png) ## Merge Details ### Merge Method This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method. ### Models Merged The following models were included in the merge: * [OddTheGreat/Cogwheel_24b_V.2](https://huggingface.co/OddTheGreat/Cogwheel_24b_V.2) * [Vortex5/ChaosFlowerRP-24B](https://huggingface.co/Vortex5/ChaosFlowerRP-24B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: Vortex5/ChaosFlowerRP-24B - model: OddTheGreat/Cogwheel_24b_V.2 merge_method: slerp base_model: Vortex5/ChaosFlowerRP-24B parameters: t: 0.5 dtype: bfloat16 ```
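The card stops at the merge config; a minimal loading sketch, assuming the merged checkpoint keeps the standard Mistral chat interface of its parent models:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Vortex5/Clockwork-Flower-24B")
model = AutoModelForCausalLM.from_pretrained(
    "Vortex5/Clockwork-Flower-24B", torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Open a clockpunk fairy tale in two sentences."}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=80)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```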
phospho-app/Mahanthesh0r-gr00t-jenga_pull-p3pvn
phospho-app
2025-06-15T17:30:35Z
0
0
null
[ "safetensors", "gr00t_n1", "phosphobot", "gr00t", "region:us" ]
null
2025-06-15T15:32:24Z
--- tags: - phosphobot - gr00t task_categories: - robotics --- # gr00t Model - phospho Training Pipeline ## This model was trained using **phospho**. Training was successful. Try it out on your robot! ## Training parameters: - **Dataset**: [Mahanthesh0r/jenga_pull](https://huggingface.co/datasets/Mahanthesh0r/jenga_pull) - **Wandb run URL**: None - **Epochs**: 10 - **Batch size**: 27 - **Training steps**: None 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
freakyfractal/otang
freakyfractal
2025-06-15T17:30:11Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
2025-06-15T17:29:39Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/Coinye_2021.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: null --- # otang <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/freakyfractal/otang/tree/main) them in the Files & versions tab.
MomlessTomato/kasumi-nakasu
MomlessTomato
2025-06-15T17:29:26Z
3
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:cagliostrolab/animagine-xl-3.0", "base_model:adapter:cagliostrolab/animagine-xl-3.0", "license:mit", "region:us" ]
text-to-image
2024-09-01T19:21:51Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: >- high quality, defined pupil, looking at viewer, rounded pupil, defined iris, (soft iris:1.2), torso shadow, blunt bangs, side bun, parameters: negative_prompt: >- bad_anatomy, deformation, amputation, deformity, deformed_nipples, duplicated_torso, deformed_torso, long_torso, large_torso, unproportioned_torso, (deformed_pussy:1.2), (deformed_hands:1.2), unproportioned_eyes, unproportioned_head, small_head, duplicated_nose, big_nose, fusioned_clothes, fusioned_arms, undefined_limbs, divided_pussy, red_pussy, duplicated_pussy, deformed_anus, deformed_pussy, output: url: images/kasumi.png base_model: Linaqruf/animagine-xl-3.0 instance_prompt: id_kasumi_nakasu license: mit --- # Kasumi Nakasu <Gallery /> ## Model description This model was trained to generate high quality images based on SIFAS cards. To achieve better quality, you should use hako-mikan's regional prompter, along with Latent Mode, which modifies the way Stable Diffusion isolates the LoRA, resulting in a significant improvement. ## Trigger words You should use `id_kasumi_nakasu` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/theidoldaily/kasumi-nakasu/tree/main) them in the Files & versions tab.
Megha06/q-FrozenLake-v1-4x4-noSlippery
Megha06
2025-06-15T17:29:07Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-06-15T17:29:05Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Megha06/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
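The snippet above assumes a `load_from_hub` helper; a self-contained sketch following the Hugging Face Deep RL course conventions (the `"qtable"` key is an assumption taken from that course's checkpoint format):

```python
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle a Q-learning checkpoint from the Hub."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub("Megha06/q-FrozenLake-v1-4x4-noSlippery", "q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)

# Greedy rollout with the learned Q-table.
state, _ = env.reset(seed=0)
done = False
while not done:
    action = int(model["qtable"][state].argmax())
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
print("episode reward:", reward)
```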
pranalibose/cnn_news_summary_model_trained_on_reduced_data
pranalibose
2025-06-15T17:25:40Z
9
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-06-12T10:32:07Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer model-index: - name: cnn_news_summary_model_trained_on_reduced_data results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cnn_news_summary_model_trained_on_reduced_data This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Generated Length | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------------:| | No log | 1.0 | 144 | 1.8314 | 0.234 | 0.0971 | 0.1917 | 0.1918 | 18.9913 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
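The card's usage sections are empty; a minimal sketch, assuming the standard T5 summarization interface (the article text is an invented example):

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="pranalibose/cnn_news_summary_model_trained_on_reduced_data",
)
article = (
    "The city council voted on Tuesday to expand the downtown bike network, "
    "adding twelve miles of protected lanes over the next two years and "
    "funding the work through a regional transport grant."
)
print(summarizer(article, max_length=30, min_length=5)[0]["summary_text"])
```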
krissnonflux/flux-Analog-Art
krissnonflux
2025-06-15T17:25:02Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-15T16:47:11Z
--- license: apache-2.0 ---
deadcode99/qwen2.5-0.5B-coder
deadcode99
2025-06-15T17:24:37Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/Qwen2.5-Coder-0.5B", "base_model:finetune:unsloth/Qwen2.5-Coder-0.5B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T14:59:31Z
--- base_model: unsloth/Qwen2.5-Coder-0.5B tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** deadcode99 - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-Coder-0.5B This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
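No usage snippet is included; a minimal sketch, assuming the standard causal-LM interface of the Qwen2.5-Coder base:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="deadcode99/qwen2.5-0.5B-coder")
prompt = "# Python function that checks whether a number is prime\ndef is_prime(n):"
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```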
CodeAid/solid_model_v1
CodeAid
2025-06-15T17:24:04Z
10
0
peft
[ "peft", "safetensors", "qwen2", "llama-factory", "lora", "generated_from_trainer", "custom_code", "base_model:Qwen/Qwen2.5-14B-Instruct", "base_model:adapter:Qwen/Qwen2.5-14B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-06-11T15:47:40Z
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2.5-14B-Instruct tags: - llama-factory - lora - generated_from_trainer model-index: - name: solid_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # solid_model This model is a fine-tuned version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) on the solidDetection_finetune_train dataset. It achieves the following results on the evaluation set: - Loss: 0.3756 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.5094 | 0.1952 | 100 | 0.4181 | | 0.4663 | 0.3904 | 200 | 0.3911 | | 0.4742 | 0.5857 | 300 | 0.3904 | | 0.4678 | 0.7809 | 400 | 0.3772 | | 0.442 | 0.9761 | 500 | 0.3705 | | 0.3561 | 1.1718 | 600 | 0.3618 | | 0.3323 | 1.3670 | 700 | 0.3516 | | 0.3394 | 1.5622 | 800 | 0.3499 | | 0.3549 | 1.7574 | 900 | 0.3382 | | 0.3353 | 1.9527 | 1000 | 0.3380 | | 0.2245 | 2.1464 | 1100 | 0.3625 | | 0.1903 | 2.3416 | 1200 | 0.3585 | | 0.1557 | 2.5349 | 1300 | 0.3751 | | 0.179 | 2.7301 | 1400 | 0.3745 | | 0.1679 | 2.9253 | 1500 | 0.3758 | ### Framework versions - PEFT 0.15.2 - Transformers 4.52.4 - Pytorch 2.7.0+cu128 - Datasets 3.6.0 - Tokenizers 0.21.1
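The card omits a loading example; a minimal PEFT sketch, assuming the adapter is applied on top of the base model listed above (note the 14B base needs substantial GPU memory):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-14B-Instruct")
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-14B-Instruct", torch_dtype="auto", device_map="auto"
)
# Attach the LoRA adapter from this repo to the frozen base weights.
model = PeftModel.from_pretrained(base, "CodeAid/solid_model_v1")
model.eval()
```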
Salmaalaa/CodeLlama-7b-Instruct_AR2SQL_v7
Salmaalaa
2025-06-15T17:23:51Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:codellama/CodeLlama-7b-Instruct-hf", "base_model:finetune:codellama/CodeLlama-7b-Instruct-hf", "endpoints_compatible", "region:us" ]
null
2025-06-15T16:04:39Z
--- base_model: codellama/CodeLlama-7b-Instruct-hf library_name: transformers model_name: CodeLlama-7b-Instruct_AR2SQL_v7 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for CodeLlama-7b-Instruct_AR2SQL_v7 This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Salmaalaa/CodeLlama-7b-Instruct_AR2SQL_v7", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.18.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
King-Cane/RareBit-v2-32B-Q4_K_S-GGUF
King-Cane
2025-06-15T17:20:33Z
0
0
transformers
[ "transformers", "gguf", "chat", "merge", "roleplay", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:ParasiticRogue/RareBit-v2-32B", "base_model:quantized:ParasiticRogue/RareBit-v2-32B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-06-15T17:19:08Z
--- base_model: ParasiticRogue/RareBit-v2-32B license: apache-2.0 license_name: qwen license_link: https://huggingface.co/Qwen/Qwen2.5-32B-Instruct/blob/main/LICENSE language: - en pipeline_tag: text-generation tags: - chat - merge - roleplay - llama-cpp - gguf-my-repo library_name: transformers --- # King-Cane/RareBit-v2-32B-Q4_K_S-GGUF This model was converted to GGUF format from [`ParasiticRogue/RareBit-v2-32B`](https://huggingface.co/ParasiticRogue/RareBit-v2-32B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/ParasiticRogue/RareBit-v2-32B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo King-Cane/RareBit-v2-32B-Q4_K_S-GGUF --hf-file rarebit-v2-32b-q4_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo King-Cane/RareBit-v2-32B-Q4_K_S-GGUF --hf-file rarebit-v2-32b-q4_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo King-Cane/RareBit-v2-32B-Q4_K_S-GGUF --hf-file rarebit-v2-32b-q4_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo King-Cane/RareBit-v2-32B-Q4_K_S-GGUF --hf-file rarebit-v2-32b-q4_k_s.gguf -c 2048 ```
BootesVoid/cmbxw5hwe026prdqs26dxpx82_cmbxwj8u6027erdqsjl8044r3
BootesVoid
2025-06-15T17:19:35Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-15T17:19:32Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: LIA --- # Cmbxw5Hwe026Prdqs26Dxpx82_Cmbxwj8U6027Erdqsjl8044R3 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `LIA` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "LIA", "lora_weights": "https://huggingface.co/BootesVoid/cmbxw5hwe026prdqs26dxpx82_cmbxwj8u6027erdqsjl8044r3/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmbxw5hwe026prdqs26dxpx82_cmbxwj8u6027erdqsjl8044r3', weight_name='lora.safetensors') image = pipeline('LIA').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmbxw5hwe026prdqs26dxpx82_cmbxwj8u6027erdqsjl8044r3/discussions) to add images that show off what you've made with this LoRA.
SidXXD/Romanticism
SidXXD
2025-06-15T17:18:53Z
6
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "custom-diffusion", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2025-01-07T16:15:05Z
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: photo of a sks art
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---

# Custom Diffusion - SidXXD/Romanticism

These are Custom Diffusion adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on photo of a sks art using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images below.

For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
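For quick inference with 🧨 diffusers, a minimal sketch follows. The `weight_name` below is the diffusers default for Custom Diffusion checkpoints and is an assumption; check this repository's file list for the actual name.

```python
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the Custom Diffusion attention-processor weights from this repo;
# the weight_name is the diffusers default and is an assumption.
pipeline.unet.load_attn_procs(
    "SidXXD/Romanticism", weight_name="pytorch_custom_diffusion_weights.bin"
)

image = pipeline("photo of a sks art").images[0]
image.save("romanticism-sample.png")
```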
marduk191/sleipnirTLHTurbo_v27TLHFP32Main_quants
marduk191
2025-06-15T17:18:33Z
0
0
null
[ "gguf", "region:us" ]
null
2025-06-15T16:06:18Z
Quantized versions of Sleipnir [TLH] (Turbo, Lightning, Hyper).

Original model page and author: https://civitai.com/models/228772?modelVersionId=491832
Cikgu-Fadhilah-Video-Viral-Official/HOT.18.VIDEO.Cikgu.Fadhilah.Viral.Video.Official.link
Cikgu-Fadhilah-Video-Viral-Official
2025-06-15T17:18:15Z
0
0
null
[ "region:us" ]
null
2025-06-15T17:17:40Z
DanteChapterMaster/house-price-predictor
DanteChapterMaster
2025-06-15T17:16:27Z
0
0
null
[ "joblib", "license:mit", "region:us" ]
null
2025-06-15T17:03:49Z
---
license: mit
---

# 🏡 House Price Predictor (Kaggle + Hugging Face)

This project is a complete machine learning pipeline for predicting house prices in Ames, Iowa, using structured data and transformer-based text embeddings. It was developed as part of the [Kaggle House Prices - Advanced Regression Techniques](https://www.kaggle.com/c/house-prices-advanced-regression-techniques) competition.

The model is published on the Hugging Face Hub:
👉 https://huggingface.co/DanteChapterMaster/house-price-predictor

---

## 📦 Project Highlights

- ✅ Exploratory Data Analysis (EDA)
- ✅ Feature Engineering from domain knowledge
- ✅ Model training: Ridge, Lasso, Random Forest, XGBoost, and Stacking
- ✅ NLP augmentation: BERT embeddings from generated property descriptions
- ✅ Full model pipeline with preprocessing (ColumnTransformer)
- ✅ Deployment-ready model saved with `joblib`

---

## 📊 Features

**Numerical Features:**
- `GrLivArea`, `TotalBsmtSF`, `GarageCars`, etc.

**Categorical Features:**
- `Neighborhood`, `HouseStyle`, etc. (one-hot encoded)

**Generated Features:**
- Log-transformed target
- Interaction terms
- Transformer-based embeddings from property descriptions

---

## 🤖 Model Card

- **Type:** Regressor
- **Algorithm:** XGBoost in Scikit-learn `Pipeline`
- **Target:** `SalePrice` (log-transformed)
- **Evaluation:** Root Mean Squared Error (RMSE)
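A minimal loading sketch, assuming the serialized pipeline is stored as `model.joblib` (a hypothetical filename; check the repository's file list) and that predictions are on the log scale as described above:

```python
import joblib
import numpy as np
import pandas as pd
from huggingface_hub import hf_hub_download

# "model.joblib" is a hypothetical artifact name; check the repo's file list.
path = hf_hub_download(
    repo_id="DanteChapterMaster/house-price-predictor",
    filename="model.joblib",
)
pipeline = joblib.load(path)

# Feed the raw Kaggle test set; the embedded ColumnTransformer handles preprocessing.
X_test = pd.read_csv("test.csv")
log_preds = pipeline.predict(X_test)
sale_price = np.expm1(log_preds)  # undo the log transform of the target
print(sale_price[:5])
```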
mradermacher/QwQ-32B_openthoughts3_100k-GGUF
mradermacher
2025-06-15T17:15:42Z
0
0
transformers
[ "transformers", "gguf", "llama-factory", "full", "generated_from_trainer", "en", "base_model:mlfoundations-dev/QwQ-32B_openthoughts3_100k", "base_model:quantized:mlfoundations-dev/QwQ-32B_openthoughts3_100k", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-15T11:21:10Z
---
base_model: mlfoundations-dev/QwQ-32B_openthoughts3_100k
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- llama-factory
- full
- generated_from_trainer
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->

static quants of https://huggingface.co/mlfoundations-dev/QwQ-32B_openthoughts3_100k

<!-- provided-files -->

weighted/imatrix quants are available at https://huggingface.co/mradermacher/QwQ-32B_openthoughts3_100k-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B_openthoughts3_100k-GGUF/resolve/main/QwQ-32B_openthoughts3_100k.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B_openthoughts3_100k-GGUF/resolve/main/QwQ-32B_openthoughts3_100k.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B_openthoughts3_100k-GGUF/resolve/main/QwQ-32B_openthoughts3_100k.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B_openthoughts3_100k-GGUF/resolve/main/QwQ-32B_openthoughts3_100k.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B_openthoughts3_100k-GGUF/resolve/main/QwQ-32B_openthoughts3_100k.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B_openthoughts3_100k-GGUF/resolve/main/QwQ-32B_openthoughts3_100k.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B_openthoughts3_100k-GGUF/resolve/main/QwQ-32B_openthoughts3_100k.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B_openthoughts3_100k-GGUF/resolve/main/QwQ-32B_openthoughts3_100k.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B_openthoughts3_100k-GGUF/resolve/main/QwQ-32B_openthoughts3_100k.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B_openthoughts3_100k-GGUF/resolve/main/QwQ-32B_openthoughts3_100k.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B_openthoughts3_100k-GGUF/resolve/main/QwQ-32B_openthoughts3_100k.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
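As a sketch of driving one of these quants from Python, assuming `huggingface_hub` and the `llama-cpp-python` bindings are installed (neither ships with this repo):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q4_K_S quant recommended in the table above.
gguf_path = hf_hub_download(
    repo_id="mradermacher/QwQ-32B_openthoughts3_100k-GGUF",
    filename="QwQ-32B_openthoughts3_100k.Q4_K_S.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
print(llm("Hello", max_tokens=32)["choices"][0]["text"])
```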
TheGardener/KD-Embedding-and-MLP-Llama-0.7B-epoch-5th-ver4
TheGardener
2025-06-15T17:15:17Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T17:14:41Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PlasticTr33s/t5-base-multi-qg-squadv2
PlasticTr33s
2025-06-15T17:13:41Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-base", "base_model:finetune:google-t5/t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-15T09:54:44Z
---
library_name: transformers
license: apache-2.0
base_model: google-t5/t5-base
tags:
- generated_from_trainer
model-index:
- name: t5-base-multi-qg-squadv2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# t5-base-multi-qg-squadv2

This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on an unspecified dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
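A minimal sketch of running the checkpoint with the transformers pipeline API; the `generate question:` prefix is an assumed input format, since the expected prompt layout is not documented here:

```python
from transformers import pipeline

qg = pipeline("text2text-generation", model="PlasticTr33s/t5-base-multi-qg-squadv2")

# The "generate question:" prefix is an assumed convention for question-generation
# models fine-tuned on SQuAD-style data; adjust to the training format if known.
context = "The Eiffel Tower was completed in 1889 and is located in Paris."
print(qg("generate question: " + context, max_new_tokens=64))
```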
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.15_0.5_0.25_epoch2
MinaMila
2025-06-15T17:13:26Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T17:11:35Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
asdfre453/NKLO
asdfre453
2025-06-15T17:13:23Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-15T16:50:02Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
#   - text: >-
#       prompt
#     output:
#       url: https://...
instance_prompt: NKLO
---

# Nklo

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.

It was trained on [Replicate](https://replicate.com/) using the AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `NKLO` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "NKLO",
    "lora_weights": "https://huggingface.co/asdfre453/NKLO/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('asdfre453/NKLO', weight_name='lora.safetensors')
image = pipeline('NKLO').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).

## Training details

- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/asdfre453/NKLO/discussions) to add images that show off what you've made with this LoRA.
LaaP-ai/donut-base-invoicev3
LaaP-ai
2025-06-15T17:13:07Z
0
0
transformers
[ "transformers", "safetensors", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "base_model:naver-clova-ix/donut-base", "base_model:finetune:naver-clova-ix/donut-base", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-06-15T17:12:58Z
---
library_name: transformers
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
model-index:
- name: donut-base-invoicev3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# donut-base-invoicev3

This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on an unspecified dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
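A minimal inference sketch with transformers is shown below. The `<s_invoice>` task-start token is hypothetical (each Donut fine-tune defines its own); inspect this repo's tokenizer for the actual prompt token.

```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("LaaP-ai/donut-base-invoicev3")
model = VisionEncoderDecoderModel.from_pretrained("LaaP-ai/donut-base-invoicev3")

image = Image.open("invoice.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values

# "<s_invoice>" is a hypothetical task-start token; check the repo's tokenizer
# (e.g. processor.tokenizer.additional_special_tokens) for the real one.
decoder_input_ids = processor.tokenizer(
    "<s_invoice>", add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    outputs = model.generate(
        pixel_values,
        decoder_input_ids=decoder_input_ids,
        max_length=512,
    )
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```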
utkuden/qlora_paligemma_MIXft_decoder_only_rank16-SCST-CIDEr0.1361
utkuden
2025-06-15T17:11:39Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-15T17:11:29Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
phospho-app/thellador-ACT_BBOX-example_dataset1-rfgom
phospho-app
2025-06-15T17:10:16Z
0
0
null
[ "safetensors", "phosphobot", "act", "region:us" ]
null
2025-06-15T16:45:50Z
---
tags:
- phosphobot
- act
task_categories:
- robotics
---

# act Model - phospho Training Pipeline

## This model was trained using **phospho**.

Training was successful; try it out on your robot!

## Training parameters:

- **Dataset**: [phospho-app/example_dataset1_bboxes](https://huggingface.co/datasets/phospho-app/example_dataset1_bboxes)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000

📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)

🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
LandCruiser/sn29C1_1506_9
LandCruiser
2025-06-15T17:04:07Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T03:26:58Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Peacemann/Meta-Llama-3-8B-Instruct_LMUL
Peacemann
2025-06-15T17:03:11Z
0
0
null
[ "safetensors", "L-Mul,", "optimazation", "quantization", "text-generation", "research", "experimental", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "license:llama3.1", "region:us" ]
text-generation
2025-06-15T16:40:14Z
---
license: llama3.1
base_model:
- meta-llama/Llama-3.1-8B-Instruct
tags:
- L-Mul
- optimization
- quantization
- text-generation
- research
- experimental
---

# L-Mul Optimized: meta-llama/Meta-Llama-3-8B-Instruct

This is a modified version of Meta's [Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) model. The modification consists of replacing the standard attention mechanism with one that uses a custom, approximate matrix multiplication algorithm termed "L-Mul". This work was performed as part of a research project to evaluate the performance and accuracy trade-offs of algorithmic substitutions in transformer architectures.

**This model is intended strictly for educational and scientific purposes.**

## Model Description

The core architecture of `meta-llama/Meta-Llama-3-8B-Instruct` is preserved. However, the standard `LlamaAttention` modules have been dynamically replaced with a custom version that utilizes the `l_mul_attention` function for its core computations. This function is defined in the `lmul.py` file included in this repository.

- **Base Model:** [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
- **Modification:** Replacement of standard attention with L-Mul approximate attention.
- **Primary Use-Case:** Research and educational analysis of algorithmic impact on LLMs.

## How to Get Started

To use this model, you must use the `trust_remote_code=True` flag when loading it. This is required to execute the custom `lmul.py` file that defines the new attention mechanism.

You can load the model using the `transformers` library. Since this model is stored in a subdirectory of a collective repository, you first need to download the specific files.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from huggingface_hub import snapshot_download
import torch

# Define the repository and the specific model subfolder
repo_id = "Peacemann/LMUL-Optimized-Models"
model_name = "meta-llama_Meta-Llama-3-8B-Instruct"

# Download the specific model snapshot
# Note: On Windows, you might need to set local_dir_use_symlinks=False
local_model_path = snapshot_download(
    repo_id=repo_id,
    allow_patterns=f"{model_name}/*",
)

# Construct the full path to the model files within the snapshot
local_model_path = f"{local_model_path}/{model_name}"

# Load the tokenizer and model, trusting the remote code to load lmul.py
tokenizer = AutoTokenizer.from_pretrained(local_model_path)
model = AutoModelForCausalLM.from_pretrained(
    local_model_path,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Example usage
prompt = "The L-Mul algorithm is an experimental method for..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For high-throughput inference, you can use `vLLM`:

```python
from vllm import LLM

# The local_model_path is the same as downloaded above
llm = LLM(model=local_model_path, trust_remote_code=True)
```

## Intended Uses & Limitations

This model is intended for researchers and students exploring the internal workings of LLMs. It is a tool for visualizing and analyzing the effects of fundamental algorithmic changes.

**This model is NOT intended for any commercial or production application.** The modification is experimental.
The impact on the model's performance, safety alignment, accuracy, and potential for generating biased or harmful content is **unknown and untested**. It inherits all limitations and biases of the original `Llama-3-8B-Instruct` model, and its behavior may be altered in unpredictable ways. ## Licensing Information The use of this model is subject to the original **Llama 3 Community License Agreement**. By using this model, you agree to the terms outlined in the license. The license can be found [here](https://huggingface.co/meta-llama/meta-llama-3-8b-instruct/blob/main/LICENSE).
krissnonflux/Flux_v12
krissnonflux
2025-06-15T17:01:10Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-15T15:27:13Z
--- license: apache-2.0 ---
bruhzair/prototype-0.4x139
bruhzair
2025-06-15T16:58:26Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T16:40:04Z
--- base_model: [] library_name: transformers tags: - mergekit - merge --- # prototype-0.4x139 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using /workspace/prototype-0.4x136 as a base. ### Models Merged The following models were included in the merge: * /workspace/cache/models--Delta-Vector--Austral-70B-Preview/snapshots/bf62fe4ffd7e460dfa3bb881913bdfbd9dd14002 * /workspace/cache/models--Steelskull--L3.3-Electra-R1-70b/snapshots/26c8d595ecd941ca908c49d7ae5b2dd146465341 * /workspace/cache/models--tdrussell--Llama-3-70B-Instruct-Storywriter/snapshots/19be2a7c6382a9150e126cf144e2b2964e700d3c ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: /workspace/cache/models--Steelskull--L3.3-Electra-R1-70b/snapshots/26c8d595ecd941ca908c49d7ae5b2dd146465341 - model: /workspace/cache/models--tdrussell--Llama-3-70B-Instruct-Storywriter/snapshots/19be2a7c6382a9150e126cf144e2b2964e700d3c - model: /workspace/cache/models--Delta-Vector--Austral-70B-Preview/snapshots/bf62fe4ffd7e460dfa3bb881913bdfbd9dd14002 base_model: /workspace/prototype-0.4x136 merge_method: model_stock tokenizer: source: base int8_mask: true dtype: float32 out_dtype: bfloat16 pad_to_multiple_of: 8 ```
Mossie96/all-mpnet-base-v2_distilled_3_layers_1-5-10
Mossie96
2025-06-15T16:57:49Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "mpnet", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:9014210", "loss:MSELoss", "arxiv:1908.10084", "arxiv:2004.09813", "base_model:sentence-transformers/all-mpnet-base-v2", "base_model:finetune:sentence-transformers/all-mpnet-base-v2", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-06-15T16:55:09Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:9014210 - loss:MSELoss base_model: sentence-transformers/all-mpnet-base-v2 widget: - source_sentence: At an outdoor event in an Asian-themed area, a crowd congregates as one person in a yellow Chinese dragon costume confronts the camera. sentences: - Boy dressed in blue holds a toy. - the animal is running - Two young asian men are squatting. - source_sentence: A man with a shopping cart is studying the shelves in a supermarket aisle. sentences: - The children are watching TV at home. - Three young boys one is holding a camera and another is holding a green toy all are wearing t-shirt and smiling. - A large group of people are gathered outside of a brick building lit with spotlights. - source_sentence: The door is open. sentences: - There are three men in this picture, two are on motorbikes, one of the men has a large piece of furniture on the back of his bike, the other is about to be handed a piece of paper by a man in a white shirt. - People are playing music. - A girl is using an apple laptop with her headphones in her ears. - source_sentence: A small group of children are standing in a classroom and one of them has a foot in a trashcan, which also has a rope leading out of it. sentences: - Children are swimming at the beach. - Women are celebrating at a bar. - Some men with jerseys are in a bar, watching a soccer match. - source_sentence: A black dog is drinking next to a brown and white dog that is looking at an orange ball in the lake, whilst a horse and rider passes behind. sentences: - There are two people running around a track in lane three and the one wearing a blue shirt with a green thing over the eyes is just barely ahead of the guy wearing an orange shirt and sunglasses. - A girl is sitting - the guy is dead pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - pearson_cosine - spearman_cosine - negative_mse model-index: - name: SentenceTransformer based on sentence-transformers/all-mpnet-base-v2 results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts dev type: sts-dev metrics: - type: pearson_cosine value: 0.8658614353354085 name: Pearson Cosine - type: spearman_cosine value: 0.8685416201709716 name: Spearman Cosine - task: type: knowledge-distillation name: Knowledge Distillation dataset: name: Unknown type: unknown metrics: - type: negative_mse value: -0.01582021452486515 name: Negative Mse - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts test type: sts-test metrics: - type: pearson_cosine value: 0.8308551017458387 name: Pearson Cosine - type: spearman_cosine value: 0.8339024536295018 name: Spearman Cosine --- # SentenceTransformer based on sentence-transformers/all-mpnet-base-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) <!-- at revision 12e86a3c702fc3c50205a8db88f0ec7c0b6b94a0 -->
- **Maximum Sequence Length:** 384 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Mossie96/all-mpnet-base-v2_distilled_3_layers_1-5-10")
# Run inference
sentences = [
    'A black dog is drinking next to a brown and white dog that is looking at an orange ball in the lake, whilst a horse and rider passes behind.',
    'There are two people running around a track in lane three and the one wearing a blue shirt with a green thing over the eyes is just barely ahead of the guy wearing an orange shirt and sunglasses.',
    'the guy is dead',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Semantic Similarity

* Datasets: `sts-dev` and `sts-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)

| Metric              | sts-dev    | sts-test   |
|:--------------------|:-----------|:-----------|
| pearson_cosine      | 0.8659     | 0.8309     |
| **spearman_cosine** | **0.8685** | **0.8339** |

#### Knowledge Distillation

* Evaluated with [<code>MSEEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.MSEEvaluator)

| Metric           | Value       |
|:-----------------|:------------|
| **negative_mse** | **-0.0158** |

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model?
You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 9,014,210 training samples * Columns: <code>sentence</code> and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence | label | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------| | type | string | list | | details | <ul><li>min: 4 tokens</li><li>mean: 12.24 tokens</li><li>max: 52 tokens</li></ul> | <ul><li>size: 768 elements</li></ul> | * Samples: | sentence | label | |:---------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------| | <code>A person on a horse jumps over a broken down airplane.</code> | <code>[-0.030610017478466034, 0.11742044985294342, 0.031586047261953354, 0.01859636977314949, 0.016319412738084793, ...]</code> | | <code>Children smiling and waving at camera</code> | <code>[-0.006198188289999962, -0.036625951528549194, -0.005352460313588381, -0.006725294981151819, 0.05185901001095772, ...]</code> | | <code>A boy is jumping on skateboard in the middle of a red bridge.</code> | <code>[-0.01783316768705845, -0.05204763263463974, -0.003716366598382592, 0.0009472182719036937, 0.05223219841718674, ...]</code> | * Loss: [<code>MSELoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#mseloss) ### Evaluation Dataset #### Unnamed Dataset * Size: 10,000 evaluation samples * Columns: <code>sentence</code> and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence | label | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------| | type | string | list | | details | <ul><li>min: 5 tokens</li><li>mean: 13.23 tokens</li><li>max: 57 tokens</li></ul> | <ul><li>size: 768 elements</li></ul> | * Samples: | sentence | label | |:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------| | <code>Two women are embracing while holding to go packages.</code> | <code>[0.010130808688700199, 0.009573593735694885, -0.00034817546838894486, -0.0040625291876494884, 0.02026110142469406, ...]</code> | | <code>Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.</code> | <code>[-0.033891696482896805, -0.04130887985229492, -0.006042165216058493, -0.02770376019179821, -0.0017171527724713087, ...]</code> | | <code>A man selling donuts to a customer during a world exhibition event held in the city of Angeles</code> | <code>[0.0013940087519586086, -0.044612932950258255, -0.023834265768527985, 0.11863800883293152, -0.03907289728522301, ...]</code> | * Loss: [<code>MSELoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#mseloss) ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - 
`per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `learning_rate`: 0.0001 - `num_train_epochs`: 1 - `warmup_ratio`: 0.1 - `fp16`: True - `load_best_model_at_end`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 0.0001 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `tp_size`: 0 - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False 
- `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs <details><summary>Click to expand</summary> | Epoch | Step | Training Loss | Validation Loss | sts-dev_spearman_cosine | negative_mse | sts-test_spearman_cosine | |:----------:|:----------:|:-------------:|:---------------:|:-----------------------:|:------------:|:------------------------:| | -1 | -1 | - | - | 0.6786 | -0.2176 | - | | 0.0071 | 1000 | 0.0016 | - | - | - | - | | 0.0142 | 2000 | 0.001 | - | - | - | - | | 0.0213 | 3000 | 0.0008 | - | - | - | - | | 0.0284 | 4000 | 0.0007 | - | - | - | - | | 0.0355 | 5000 | 0.0006 | 0.0006 | 0.8511 | -0.0561 | - | | 0.0426 | 6000 | 0.0006 | - | - | - | - | | 0.0497 | 7000 | 0.0005 | - | - | - | - | | 0.0568 | 8000 | 0.0005 | - | - | - | - | | 0.0639 | 9000 | 0.0005 | - | - | - | - | | 0.0710 | 10000 | 0.0004 | 0.0004 | 0.8624 | -0.0361 | - | | 0.0781 | 11000 | 0.0004 | - | - | - | - | | 0.0852 | 12000 | 0.0004 | - | - | - | - | | 0.0923 | 13000 | 0.0004 | - | - | - | - | | 0.0994 | 14000 | 0.0004 | - | - | - | - | | 0.1065 | 15000 | 0.0003 | 0.0003 | 0.8649 | -0.0288 | - | | 0.1136 | 16000 | 0.0003 | - | - | - | - | | 0.1207 | 17000 | 0.0003 | - | - | - | - | | 0.1278 | 18000 | 0.0003 | - | - | - | - | | 0.1349 | 19000 | 0.0003 | - | - | - | - | | 0.1420 | 20000 | 0.0003 | 0.0003 | 0.8663 | -0.0252 | - | | 0.1491 | 21000 | 0.0003 | - | - | - | - | | 0.1562 | 22000 | 0.0003 | - | - | - | - | | 0.1633 | 23000 | 0.0003 | - | - | - | - | | 0.1704 | 24000 | 0.0003 | - | - | - | - | | 0.1775 | 25000 | 0.0003 | 0.0002 | 0.8641 | -0.0232 | - | | 0.1846 | 26000 | 0.0003 | - | - | - | - | | 0.1917 | 27000 | 0.0003 | - | - | - | - | | 0.1988 | 28000 | 0.0003 | - | - | - | - | | 0.2059 | 29000 | 0.0003 | - | - | - | - | | 0.2130 | 30000 | 0.0003 | 0.0002 | 0.8641 | -0.0219 | - | | 0.2201 | 31000 | 0.0003 | - | - | - | - | | 0.2272 | 32000 | 0.0003 | - | - | - | - | | 0.2343 | 33000 | 0.0003 | - | - | - | - | | 0.2414 | 34000 | 0.0003 | - | - | - | - | | 0.2485 | 35000 | 0.0003 | 0.0002 | 0.8649 | -0.0209 | - | | 0.2556 | 36000 | 0.0003 | - | - | - | - | | 0.2627 | 37000 | 0.0003 | - | - | - | - | | 0.2698 | 38000 | 0.0003 | - | - | - | - | | 0.2769 | 39000 | 0.0003 | - | - | - | - | | 0.2840 | 40000 | 0.0003 | 0.0002 | 0.8648 | -0.0202 | - | | 0.2911 | 41000 | 0.0003 | - | - | - | - | | 0.2982 | 42000 | 0.0002 | - | - | - | - | | 0.3053 | 43000 | 0.0002 | - | - | - | - | | 0.3124 | 44000 | 0.0002 | - | - | - | - | | 0.3195 | 45000 | 0.0002 | 0.0002 | 0.8663 | -0.0196 | - | | 0.3266 | 46000 | 0.0002 | - | - | - | - | | 0.3337 | 47000 | 0.0002 | - | - | - | - | | 0.3408 | 48000 | 0.0002 | - | - | - | - | | 0.3479 | 49000 | 0.0002 | - | - | - | - | | 0.3550 | 50000 | 0.0002 | 0.0002 | 0.8665 | -0.0192 | - | | 0.3621 | 51000 | 0.0002 | - | - | - | - | | 0.3692 | 52000 | 0.0002 | - | - | - | - | | 0.3763 | 53000 | 0.0002 | - | - | - | - | | 0.3834 | 54000 | 0.0002 | - | - | - | - | | 0.3905 | 55000 | 0.0002 | 0.0002 | 0.8650 | -0.0187 | - | | 0.3976 | 56000 | 0.0002 | - | - | - | - | | 0.4047 | 57000 | 0.0002 | - | - | - | - | | 0.4118 | 58000 | 0.0002 | - | - | - | - | | 0.4189 | 59000 | 0.0002 | - | - | - | - | | 0.4260 | 60000 | 0.0002 | 0.0002 | 0.8636 | -0.0184 | - | | 0.4331 | 61000 | 0.0002 | - | - | - | - | | 0.4402 | 62000 | 0.0002 | - | - | - | - | | 0.4473 | 63000 | 0.0002 | - | - | - | - | | 0.4544 | 64000 | 0.0002 | - | - | - | - | | 0.4615 | 65000 | 0.0002 | 0.0002 | 0.8673 | -0.0180 | - | | 0.4686 | 66000 | 0.0002 | - | - 
| - | - | | 0.4757 | 67000 | 0.0002 | - | - | - | - | | 0.4828 | 68000 | 0.0002 | - | - | - | - | | 0.4899 | 69000 | 0.0002 | - | - | - | - | | 0.4970 | 70000 | 0.0002 | 0.0002 | 0.8692 | -0.0178 | - | | 0.5041 | 71000 | 0.0002 | - | - | - | - | | 0.5112 | 72000 | 0.0002 | - | - | - | - | | 0.5183 | 73000 | 0.0002 | - | - | - | - | | 0.5254 | 74000 | 0.0002 | - | - | - | - | | 0.5325 | 75000 | 0.0002 | 0.0002 | 0.8675 | -0.0175 | - | | 0.5396 | 76000 | 0.0002 | - | - | - | - | | 0.5467 | 77000 | 0.0002 | - | - | - | - | | 0.5538 | 78000 | 0.0002 | - | - | - | - | | 0.5609 | 79000 | 0.0002 | - | - | - | - | | 0.5680 | 80000 | 0.0002 | 0.0002 | 0.8657 | -0.0173 | - | | 0.5751 | 81000 | 0.0002 | - | - | - | - | | 0.5822 | 82000 | 0.0002 | - | - | - | - | | 0.5893 | 83000 | 0.0002 | - | - | - | - | | 0.5964 | 84000 | 0.0002 | - | - | - | - | | 0.6035 | 85000 | 0.0002 | 0.0002 | 0.8670 | -0.0171 | - | | 0.6106 | 86000 | 0.0002 | - | - | - | - | | 0.6177 | 87000 | 0.0002 | - | - | - | - | | 0.6248 | 88000 | 0.0002 | - | - | - | - | | 0.6319 | 89000 | 0.0002 | - | - | - | - | | 0.6390 | 90000 | 0.0002 | 0.0002 | 0.8665 | -0.0169 | - | | 0.6461 | 91000 | 0.0002 | - | - | - | - | | 0.6532 | 92000 | 0.0002 | - | - | - | - | | 0.6603 | 93000 | 0.0002 | - | - | - | - | | 0.6674 | 94000 | 0.0002 | - | - | - | - | | 0.6745 | 95000 | 0.0002 | 0.0002 | 0.8672 | -0.0167 | - | | 0.6816 | 96000 | 0.0002 | - | - | - | - | | 0.6887 | 97000 | 0.0002 | - | - | - | - | | 0.6958 | 98000 | 0.0002 | - | - | - | - | | 0.7029 | 99000 | 0.0002 | - | - | - | - | | 0.7100 | 100000 | 0.0002 | 0.0002 | 0.8657 | -0.0165 | - | | 0.7171 | 101000 | 0.0002 | - | - | - | - | | 0.7242 | 102000 | 0.0002 | - | - | - | - | | 0.7313 | 103000 | 0.0002 | - | - | - | - | | 0.7384 | 104000 | 0.0002 | - | - | - | - | | 0.7455 | 105000 | 0.0002 | 0.0002 | 0.8676 | -0.0165 | - | | 0.7526 | 106000 | 0.0002 | - | - | - | - | | 0.7597 | 107000 | 0.0002 | - | - | - | - | | 0.7668 | 108000 | 0.0002 | - | - | - | - | | 0.7739 | 109000 | 0.0002 | - | - | - | - | | 0.7810 | 110000 | 0.0002 | 0.0002 | 0.8672 | -0.0164 | - | | 0.7881 | 111000 | 0.0002 | - | - | - | - | | 0.7952 | 112000 | 0.0002 | - | - | - | - | | 0.8023 | 113000 | 0.0002 | - | - | - | - | | 0.8094 | 114000 | 0.0002 | - | - | - | - | | **0.8165** | **115000** | **0.0002** | **0.0002** | **0.8698** | **-0.0162** | **-** | | 0.8236 | 116000 | 0.0002 | - | - | - | - | | 0.8307 | 117000 | 0.0002 | - | - | - | - | | 0.8378 | 118000 | 0.0002 | - | - | - | - | | 0.8449 | 119000 | 0.0002 | - | - | - | - | | 0.8520 | 120000 | 0.0002 | 0.0002 | 0.8685 | -0.0161 | - | | 0.8591 | 121000 | 0.0002 | - | - | - | - | | 0.8662 | 122000 | 0.0002 | - | - | - | - | | 0.8733 | 123000 | 0.0002 | - | - | - | - | | 0.8804 | 124000 | 0.0002 | - | - | - | - | | 0.8875 | 125000 | 0.0002 | 0.0002 | 0.8676 | -0.0160 | - | | 0.8946 | 126000 | 0.0002 | - | - | - | - | | 0.9017 | 127000 | 0.0002 | - | - | - | - | | 0.9088 | 128000 | 0.0002 | - | - | - | - | | 0.9159 | 129000 | 0.0002 | - | - | - | - | | 0.9230 | 130000 | 0.0002 | 0.0002 | 0.8682 | -0.0159 | - | | 0.9301 | 131000 | 0.0002 | - | - | - | - | | 0.9372 | 132000 | 0.0002 | - | - | - | - | | 0.9443 | 133000 | 0.0002 | - | - | - | - | | 0.9514 | 134000 | 0.0002 | - | - | - | - | | 0.9585 | 135000 | 0.0002 | 0.0002 | 0.8678 | -0.0158 | - | | 0.9656 | 136000 | 0.0002 | - | - | - | - | | 0.9727 | 137000 | 0.0002 | - | - | - | - | | 0.9798 | 138000 | 0.0002 | - | - | - | - | | 0.9869 | 139000 | 0.0002 | - | - | - | - | | 0.9940 | 140000 | 0.0002 | 0.0002 | 
0.8685 | -0.0158 | - | | -1 | -1 | - | - | - | - | 0.8339 | * The bold row denotes the saved checkpoint. </details> ### Framework Versions - Python: 3.11.11 - Sentence Transformers: 4.1.0 - Transformers: 4.51.3 - PyTorch: 2.7.1+cu118 - Accelerate: 1.7.0 - Datasets: 3.3.2 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MSELoss ```bibtex @inproceedings{reimers-2020-multilingual-sentence-bert, title = "Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2020", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2004.09813", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
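The card fragment above carries no usage snippet, so here is a minimal sketch of loading the distilled sentence-transformers model and scoring a sentence pair with cosine similarity (the metric behind the `sts-dev_spearman_cosine` column). The repo id below is a placeholder, since the model's actual Hub id is not shown in this fragment.

```python
from sentence_transformers import SentenceTransformer

# Placeholder id -- substitute this model's actual Hub repo id.
model = SentenceTransformer("your-username/your-distilled-model")

sentences = ["A man is eating food.", "Someone is having a meal."]
embeddings = model.encode(sentences)

# sts-dev_spearman_cosine above is the Spearman correlation of cosine scores,
# so cosine similarity is the intended scoring function.
print(model.similarity(embeddings[0], embeddings[1]))
```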
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.15_0.5_0.5_epoch2
MinaMila
2025-06-15T16:57:25Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T16:55:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Nitish035/mistral_32_large_level2-3
Nitish035
2025-06-15T16:56:58Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-15T16:56:52Z
--- base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mistral - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Nitish035 - **License:** apache-2.0 - **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3-bnb-4bit This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
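The card stops at the Unsloth badge, so the following is a hypothetical inference sketch rather than an official recipe from the author: it assumes the uploaded checkpoint is a complete chat model loadable with plain transformers (the bnb-4bit base may additionally require `bitsandbytes`).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nitish035/mistral_32_large_level2-3"  # id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Mistral-instruct models expect their chat template.
messages = [{"role": "user", "content": "Summarize LoRA fine-tuning in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```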
parveen-Official-Viral-Videos/FULL.VIDEO.parveen.Viral.Video.Tutorial.Official
parveen-Official-Viral-Videos
2025-06-15T16:56:57Z
0
0
null
[ "region:us" ]
null
2025-06-15T16:56:26Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
LandCruiser/sn29C1_1506_5
LandCruiser
2025-06-15T16:55:08Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T03:26:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
diszell2008/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lightfooted_beaked_alpaca
diszell2008
2025-06-15T16:54:26Z
1
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am lightfooted beaked alpaca", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-13T19:48:28Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lightfooted_beaked_alpaca tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am lightfooted beaked alpaca - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lightfooted_beaked_alpaca This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="diszell2008/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lightfooted_beaked_alpaca", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.7.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
rmdhirr/suja-lorab-ep5-suja-2000
rmdhirr
2025-06-15T16:52:44Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:rmdhirr/merged-suja-latest", "base_model:adapter:rmdhirr/merged-suja-latest", "region:us" ]
null
2025-06-15T16:51:40Z
--- base_model: rmdhirr/merged-suja-latest library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
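Since the card's quick-start section is still [More Information Needed], here is a minimal sketch of attaching this PEFT adapter to its declared base model; it assumes the base is a causal LM, which the card does not confirm.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: causal-LM architecture; swap the Auto class if the base differs.
base = AutoModelForCausalLM.from_pretrained("rmdhirr/merged-suja-latest")
model = PeftModel.from_pretrained(base, "rmdhirr/suja-lorab-ep5-suja-2000")
tokenizer = AutoTokenizer.from_pretrained("rmdhirr/merged-suja-latest")
```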
LandCruiser/sn29C1_1506_8
LandCruiser
2025-06-15T16:51:41Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T03:26:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.15_0.5_0.5_epoch1
MinaMila
2025-06-15T16:49:31Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T16:47:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
IoanaLiviaPopescu/real-data-synth-data-1200-1-Wavenet-B-whisper-small-v0
IoanaLiviaPopescu
2025-06-15T16:49:13Z
0
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "ro", "dataset:IoanaLiviaPopescu/RealVoiceSynthVoice-1200-1-Wavenet-B", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-06-15T15:43:44Z
--- library_name: transformers language: - ro license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - IoanaLiviaPopescu/RealVoiceSynthVoice-1200-1-Wavenet-B metrics: - wer model-index: - name: IoanaLiviaPopescu/real-data-synth-data-1200-1-Wavenet-B-whisper-small-v0 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: IoanaLiviaPopescu/RealVoiceSynthVoice-1200-1-Wavenet-B type: IoanaLiviaPopescu/RealVoiceSynthVoice-1200-1-Wavenet-B config: default split: test args: 'split: validation' metrics: - name: Wer type: wer value: 17.00165959800848 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # IoanaLiviaPopescu/real-data-synth-data-1200-1-Wavenet-B-whisper-small-v0 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the IoanaLiviaPopescu/RealVoiceSynthVoice-1200-1-Wavenet-B dataset. It achieves the following results on the evaluation set: - Loss: 0.3759 - Wer: 17.0017 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | No log | 0 | 0 | 0.6024 | 27.8812 | | 0.2756 | 1.0 | 51 | 0.4008 | 17.9974 | | 0.1052 | 2.0 | 102 | 0.3728 | 17.3705 | | 0.0551 | 3.0 | 153 | 0.3759 | 17.0017 | | 0.0322 | 4.0 | 204 | 0.3911 | 17.5180 | | 0.0227 | 5.0 | 255 | 0.4033 | 17.6102 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
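The card lists no usage example, so here is a minimal transcription sketch; the audio path is a placeholder, and the language is forced to Romanian to match the card's `ro` tag.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="IoanaLiviaPopescu/real-data-synth-data-1200-1-Wavenet-B-whisper-small-v0",
)
# "speech.wav" is a placeholder for a 16 kHz Romanian recording.
result = asr("speech.wav", generate_kwargs={"language": "romanian"})
print(result["text"])
```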
lmquan/hummingbird
lmquan
2025-06-15T16:46:08Z
10
2
diffusers
[ "diffusers", "safetensors", "image-to-image", "en", "arxiv:2502.05153", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
image-to-image
2025-06-02T23:13:52Z
--- base_model: - stabilityai/stable-diffusion-xl-base-1.0 language: - en pipeline_tag: image-to-image library_name: diffusers --- # Hummingbird: High Fidelity Image Generation via Multimodal Context Alignment This repository contains the LoRA weights for the Hummingbird model, presented in [Hummingbird: High Fidelity Image Generation via Multimodal Context Alignment](https://huggingface.co/papers/2502.05153). The Hummingbird model generates high-quality, diverse images from a multimodal context, preserving scene attributes and object interactions from both a reference image and text guidance. [Project page](https://roar-ai.github.io/hummingbird) | [Paper](https://openreview.net/forum?id=6kPBThI6ZJ) ### Official implementation of the paper: [Hummingbird: High Fidelity Image Generation via Multimodal Context Alignment](https://openreview.net/pdf?id=6kPBThI6ZJ) ![image/png](https://roar-ai.github.io/hummingbird/static/images/teaser_comparison_v1.png) ## Prerequisites ### Installation 1. Clone this repository and navigate to the hummingbird-1 folder ``` git clone https://github.com/roar-ai/hummingbird-1 cd hummingbird-1 ``` 2. Create a `conda` virtual environment with Python 3.9; PyTorch 2.0+ is recommended: ``` conda create -n hummingbird python=3.9 conda activate hummingbird pip install torch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 --index-url https://download.pytorch.org/whl/cu124 pip install -r requirements.txt ``` 3. Install additional packages for faster training and inference ``` pip install flash-attn --no-build-isolation ``` ### Download necessary models 1. Clone our Hummingbird LoRA weights for the UNet denoiser ``` git clone https://huggingface.co/lmquan/hummingbird ``` 2. Refer to [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/tree/main) to download the SDXL pre-trained model and place it in the hummingbird weight directory as `./hummingbird/stable-diffusion-xl-base-1.0`. 3. Download [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k/tree/main) for the `feature extractor` and `image encoder` in the Hummingbird framework ``` cp -r CLIP-ViT-bigG-14-laion2B-39B-b160k ./hummingbird/stable-diffusion-xl-base-1.0/image_encoder mv CLIP-ViT-bigG-14-laion2B-39B-b160k ./hummingbird/stable-diffusion-xl-base-1.0/feature_extractor ``` 4. Replace the file `model_index.json` of the pre-trained `stable-diffusion-xl-base-1.0` with our customized version for the Hummingbird framework ``` cp -r ./hummingbird/model_index.json ./hummingbird/stable-diffusion-xl-base-1.0/ ``` 5. Download the [HPSv2 weights](https://drive.google.com/file/d/1T4e6WqsS5lcs92HdmzQYonrfDH1Ub53T/view?usp=sharing) and put them here: `hpsv2/HPS_v2_compressed.pt`. 6. Download the [PickScore model weights](https://drive.google.com/file/d/1UhR0zFXiEI-spt2QdX67FY9a0dcqa9xy/view?usp=sharing) and put them here: `pickscore/pickmodel/model.safetensors`. ### Double-check that everything is all set ``` |-- hummingbird-1/ |-- hpsv2 |-- HPS_v2_compressed.pt |-- pickscore |-- pickmodel |-- config.json |-- model.safetensors |-- hummingbird |-- model_index.json |-- lora_unet_65000 |-- adapter_config.json |-- adapter_model.safetensors |-- stable-diffusion-xl-base-1.0 |-- model_index.json (replaced by our customized version, see step 4 above) |-- feature_extractor (cloned from CLIP-ViT-bigG-14-laion2B-39B-b160k) |-- image_encoder (cloned from CLIP-ViT-bigG-14-laion2B-39B-b160k) |-- text_encoder |-- text_encoder_2 |-- tokenizer |-- tokenizer_2 |-- unet |-- vae |-- ... |-- ...
``` ## Quick Start Given a reference image, Hummingbird can generate diverse variants of it while preserving specific properties/attributes, for example: ``` python3 inference.py --reference_image ./examples/image-2.jpg --attribute "color of skateboard wheels" --output_path output.jpg ``` ## Training You can train Hummingbird with the following script: ``` sh run_hummingbird.sh ``` ## Synthetic Data Generation You can generate synthetic data with the Hummingbird framework, e.g., with the MME Perception dataset: ``` python3 image_generation.py --generator hummingbird --dataset mme --save_image_gen ./synthetic_mme ``` ## Testing Evaluate the fidelity of generated images w.r.t. the reference image using Test-Time Augmentation on MLLMs (LLaVA/InternVL2): ``` python3 test_hummingbird_mme.py --dataset mme --model llava --synthetic_dir ./synthetic_mme ``` ## Acknowledgement We build on the implementation of [TextCraftor](https://github.com/snap-research/textcraftor). We thank [BLIP-2 QFormer](https://github.com/salesforce/LAVIS), [HPSv2](https://github.com/tgxs002/HPSv2), [PickScore](https://github.com/yuvalkirstain/PickScore), and [Aesthetic](https://laion.ai/blog/laion-aesthetics/) for the reward models, and the MLLMs [LLaVA](https://github.com/haotian-liu/LLaVA) and [InternVL2](https://github.com/OpenGVLab/InternVL) functioning as context descriptors in our framework. ## Citation If you find this work helpful, please cite our paper: ```BibTeX @inproceedings{le2025hummingbird, title={Hummingbird: High Fidelity Image Generation via Multimodal Context Alignment}, author={Minh-Quan Le and Gaurav Mittal and Tianjian Meng and A S M Iftekhar and Vishwas Suryanarayanan and Barun Patra and Dimitris Samaras and Mei Chen}, booktitle={The Thirteenth International Conference on Learning Representations}, year={2025}, url={https://openreview.net/forum?id=6kPBThI6ZJ} } ```
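The supported entry point is `inference.py` above; purely as an illustration of what that script wires together, here is a hedged sketch of attaching the PEFT-format LoRA (the `lora_unet_65000` folder in the layout above) to the SDXL UNet. The custom image-encoder/feature-extractor wiring of the full framework is omitted, so this is not equivalent to the official pipeline.

```python
import torch
from diffusers import StableDiffusionXLPipeline
from peft import PeftModel

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
# Subfolder name taken from the repo layout shown above.
pipe.unet = PeftModel.from_pretrained(
    pipe.unet, "lmquan/hummingbird", subfolder="lora_unet_65000"
)
pipe.to("cuda")
```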
pang1203/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thriving_fishy_panda
pang1203
2025-06-15T16:41:14Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am thriving fishy panda", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-06-14T20:35:59Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thriving_fishy_panda tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am thriving fishy panda - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thriving_fishy_panda This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="pang1203/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thriving_fishy_panda", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.15_0.5_0.75_epoch2
MinaMila
2025-06-15T16:41:14Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T16:39:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
FormlessAI/a5731fb5-5d5c-4cf2-b067-342914d611f5
FormlessAI
2025-06-15T16:41:03Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "grpo", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-1.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-1.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T13:55:33Z
--- base_model: unsloth/Qwen2.5-1.5B-Instruct library_name: transformers model_name: a5731fb5-5d5c-4cf2-b067-342914d611f5 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for a5731fb5-5d5c-4cf2-b067-342914d611f5 This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="FormlessAI/a5731fb5-5d5c-4cf2-b067-342914d611f5", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/nct0g92p) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.7.0+cu128 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
svjack/PosterCraft-v1_RL
svjack
2025-06-15T16:40:42Z
0
0
diffusers
[ "diffusers", "safetensors", "art", "diffusion", "aesthetic-poster-generation", "text-to-image", "en", "arxiv:2506.10741", "base_model:black-forest-labs/FLUX.1-dev", "base_model:finetune:black-forest-labs/FLUX.1-dev", "license:other", "endpoints_compatible", "diffusers:FluxPipeline", "region:us" ]
text-to-image
2025-06-15T14:15:23Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: LICENSE.md library_name: diffusers language: - en base_model: - black-forest-labs/FLUX.1-dev pipeline_tag: text-to-image tags: - art - diffusion - aesthetic-poster-generation --- <div align="center"> <h1>🎨 PosterCraft:<br/>Rethinking High-Quality Aesthetic Poster Generation in a Unified Framework</h1> [![arXiv](https://img.shields.io/badge/arXiv-2506.10741-red)](https://arxiv.org/abs/2506.10741) [![GitHub](https://img.shields.io/badge/GitHub-Repository-blue)](https://github.com/ephemeral182/PosterCraft) [![HuggingFace](https://img.shields.io/badge/🤗-HuggingFace-yellow)](https://huggingface.co/PosterCraft) [![Website](https://img.shields.io/badge/🌐-Website-green)](https://ephemeral182.github.io/PosterCraft/) [![Demo](https://img.shields.io/badge/🎥-Live_Demo-purple)](https://ephemeral182.github.io/PosterCraft/) <img src="assets/logo2.png" alt="PosterCraft Logo" width="1000"/> <img src="assets/teaser-1.png" alt="PosterCraft Logo" width="1000"/> </div> --- ## 🌟 What is PosterCraft? <div align="center"> <img src="assets/demo2.png" alt="What is PosterCraft - Quick Prompt Demo" width="1000"/> <br> </div> PosterCraft is a unified framework for **high-quality aesthetic poster generation** that excels in **precise text rendering**, **seamless integration of abstract art**, **striking layouts**, and **stylistic harmony**. ## 🚀 Quick Start ### 🔧 Installation ```bash # Clone the repository git clone https://github.com/ephemeral182/PosterCraft.git cd PosterCraft # Create conda environment conda create -n postercraft python=3.11 conda activate postercraft # Install dependencies pip install -r requirements.txt ``` ### 🚀 Easy Usage PosterCraft is designed as a unified and flexible framework. This makes it easy to use PosterCraft within your own custom workflows or other compatible frameworks. Loading the model is straightforward: ```python import torch from diffusers import FluxPipeline, FluxTransformer2DModel # 1. Define model IDs and settings pipeline_id = "black-forest-labs/FLUX.1-dev" postercraft_transformer_id = "PosterCraft/PosterCraft-v1_RL" device = "cuda" dtype = torch.bfloat16 # 2. Load the base pipeline pipe = FluxPipeline.from_pretrained(pipeline_id, torch_dtype=dtype) # 3. The key step: simply replace the original transformer with our fine-tuned PosterCraft model pipe.transformer = FluxTransformer2DModel.from_pretrained( postercraft_transformer_id, torch_dtype=dtype ) pipe.to(device) # Now, `pipe` is a standard diffusers pipeline ready for inference with your own logic. ``` ### 🚀 Quick Generation For the best results and to leverage our intelligent prompt rewriting feature, we recommend using the provided `inference.py` script. This script automatically enhances your creative ideas for optimal results.
To generate high-quality aesthetic posters from your prompt with `BF16` precision, please refer to our [GitHub repository](https://github.com/Ephemeral182/PosterCraft):

```bash
python inference.py \
  --prompt "Urban Canvas Street Art Expo poster with bold graffiti-style lettering and dynamic colorful splashes" \
  --enable_recap \
  --num_inference_steps 28 \
  --guidance_scale 3.5 \
  --seed 42 \
  --pipeline_path "black-forest-labs/FLUX.1-dev" \
  --custom_transformer_path "PosterCraft/PosterCraft-v1_RL" \
  --qwen_model_path "Qwen/Qwen3-8B"
```

If you are running on a GPU with limited memory, you can use `inference_offload.py` to offload some components to the CPU:

```bash
python inference_offload.py \
  --prompt "Urban Canvas Street Art Expo poster with bold graffiti-style lettering and dynamic colorful splashes" \
  --enable_recap \
  --num_inference_steps 28 \
  --guidance_scale 3.5 \
  --seed 42 \
  --pipeline_path "black-forest-labs/FLUX.1-dev" \
  --custom_transformer_path "PosterCraft/PosterCraft-v1_RL" \
  --qwen_model_path "Qwen/Qwen3-8B"
```

### 💻 Gradio Web UI

We provide a Gradio web UI for PosterCraft; please refer to our [GitHub repository](https://github.com/Ephemeral182/PosterCraft) for setup details.

```bash
python demo_gradio.py
```

### Reference Demo on Wang_Leehom (王力宏)

- reference

![image/webp](https://cdn-uploads.huggingface.co/production/uploads/634dffc49b777beec3bc6448/aL3T35fz_aJauIZ9auZVD.webp)

- target

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/634dffc49b777beec3bc6448/Ja9fjTNDd_ywe3Z3npXnP.jpeg)

## 📊 Performance Benchmarks

<div align="center">

### 📈 Quantitative Results

<table>
<thead>
<tr>
<th>Method</th>
<th>Text Recall ↑</th>
<th>Text F-score ↑</th>
<th>Text Accuracy ↑</th>
</tr>
</thead>
<tbody>
<tr>
<td style="white-space: nowrap;">OpenCOLE (Open)</td>
<td>0.082</td>
<td>0.076</td>
<td>0.061</td>
</tr>
<tr>
<td style="white-space: nowrap;">Playground-v2.5 (Open)</td>
<td>0.157</td>
<td>0.146</td>
<td>0.132</td>
</tr>
<tr>
<td style="white-space: nowrap;">SD3.5 (Open)</td>
<td>0.565</td>
<td>0.542</td>
<td>0.497</td>
</tr>
<tr>
<td style="white-space: nowrap;">Flux1.dev (Open)</td>
<td>0.723</td>
<td>0.707</td>
<td>0.667</td>
</tr>
<tr>
<td style="white-space: nowrap;">Ideogram-v2 (Close)</td>
<td>0.711</td>
<td>0.685</td>
<td>0.680</td>
</tr>
<tr>
<td style="white-space: nowrap;">BAGEL (Open)</td>
<td>0.543</td>
<td>0.536</td>
<td>0.463</td>
</tr>
<tr>
<td style="white-space: nowrap;">Gemini2.0-Flash-Gen (Close)</td>
<td>0.798</td>
<td>0.786</td>
<td>0.746</td>
</tr>
<tr>
<td style="white-space: nowrap;"><b>PosterCraft (ours)</b></td>
<td><b>0.787</b></td>
<td><b>0.774</b></td>
<td><b>0.735</b></td>
</tr>
</tbody>
</table>

<img src="assets/hpc.png" alt="hpc" width="1000"/>

</div>

---

## 📝 Citation

If you find PosterCraft useful for your research, please cite our paper:

```bibtex
@article{chen2025postercraft,
  title={PosterCraft: Rethinking High-Quality Aesthetic Poster Generation in a Unified Framework},
  author={Chen, Sixiang and Lai, Jianyu and Gao, Jialin and Ye, Tian and Chen, Haoyu and Shi, Hengyu and Shao, Shitong and Lin, Yunlong and Fei, Song and Xing, Zhaohu and Jin, Yeying and Luo, Junfeng and Wei, Xiaoming and Zhu, Lei},
  journal={arXiv preprint arXiv:2506.10741},
  year={2025}
}
```

</div>
gradientrouting-spar/horizontal_1_proxy_ntrain_25_ntrig_9_negative_3x3_seed_1_seed_25_seed_2_seed_42_20250615_163021
gradientrouting-spar
2025-06-15T16:39:39Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-15T16:39:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
6DammK9/AstolfoKarmix-XL
6DammK9
2025-06-15T16:34:12Z
0
2
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "merge", "en", "arxiv:2406.11617", "arxiv:2209.04836", "base_model:6DammK9/AstolfoMix-XL", "base_model:merge:6DammK9/AstolfoMix-XL", "base_model:Laxhar/noobai-XL-1.1", "base_model:merge:Laxhar/noobai-XL-1.1", "base_model:chemwolf/karmix-merge-experiments", "base_model:merge:chemwolf/karmix-merge-experiments", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-06-15T13:00:49Z
---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- safetensors
- merge
inference: true
thumbnail: >-
  https://huggingface.co/6DammK9/AstolfoKarmix-XL/resolve/main/250754-490829959-2688-1152-6-48-20250612014814.jpg
widget:
- text: 1boy, astolfo
  example_title: astolfo
library_name: diffusers
base_model:
- chemwolf/karmix-merge-experiments
- 6DammK9/AstolfoMix-XL
- Laxhar/noobai-XL-1.1
---

# AstolfoKarmix-XL (NoobAI based / SDXL 1.0 based) #

- Merge log and preliminary report: [215cevo-karmix.md](https://github.com/6DammK9/nai-anime-pure-negative-prompt/blob/main/ch05/recipes/215cevo-karmix.md)
- [CivitAI article (more verbose).](https://civitai.com/articles/15866/astolfokarmix-merging-models-from-2-different-base-models-v1)
- Core algorithms: [DELLA](https://arxiv.org/abs/2406.11617), [Git Rebasin](https://arxiv.org/abs/2209.04836), [Geometric Median](https://github.com/6DammK9/nai-anime-pure-negative-prompt/blob/main/ch01/fermat_pt.md).
- Currently only 7 = 2x3+1 models. ~~Little secret: No vpred at all!~~

## NoobAI based ##

- Using NoobAI as tie breaker.
- Current version: `x6c-AstolfoKarMix-25060802-f758dc0.safetensors`
- Recommended version: "25060802"
- Recommended CFG: 6.0 (**CFG++**, SEG 11.0, PAG = 1.0)
- *Prompt is minimal. Even empty.*

![250754-490829959-2688-1152-6-48-20250612014814.jpg](https://huggingface.co/6DammK9/AstolfoKarmix-XL/resolve/main/250754-490829959-2688-1152-6-48-20250612014814.jpg)

```
parameters
solo, anthro, furry, astolfo, standing in front of a car branded mercedes
Steps: 48, Sampler: DDIM CFG++, Schedule type: Automatic, CFG scale: 6, Seed: 490829959, Size: 1792x768, Model hash: 756818ffd5, Model: x6c-AstolfoKarMix-25060802-f758dc0, VAE hash: 235745af8d, VAE: sdxl-vae-fp16-fix.vae.safetensors, Denoising strength: 0.7, Clip skip: 2, Hires upscale: 1.5, Hires upscaler: Latent, SEG Active: True, SEG Blur Sigma: 11, SEG Start Step: 0, SEG End Step: 2048, PAG Active: True, PAG SANF: True, PAG Scale: 1, PAG Start Step: 0, PAG End Step: 2048, Version: v1.10.1
```

## SDXL1.0 based ##

- Using SDXL 1.0 as tie breaker.
- Current version: `x6c-AstolfoKarMix-25061201-f758dc0.safetensors`
- Recommended version: "25061201"
- Recommended CFG: 6.0 (**CFG++**, SEG 11.0, PAG = 1.0)
- *Subjectively, performance is worse than 215cR-Evo. Keep as reference.*

![250647-4013287539-1344-768-3-64-20250615233229.jpg](https://huggingface.co/6DammK9/AstolfoKarmix-XL/resolve/main/250647-4013287539-1344-768-3-64-20250615233229.jpg)

```
parameters
solo, anthro, furry, astolfo, standing in front of a car branded mclaren
Steps: 64, Sampler: Euler, Schedule type: Automatic, CFG scale: 3, Seed: 4013287539, Size: 1344x768, Model hash: e86c87a3fc, Model: x6c-AstolfoKarMix-25061201-f758dc0, VAE hash: 235745af8d, VAE: sdxl-vae-fp16-fix.vae.safetensors, Clip skip: 2, SEG Active: True, SEG Blur Sigma: 11, SEG Start Step: 0, SEG End Step: 2048, PAG Active: True, PAG SANF: True, PAG Scale: 1, PAG Start Step: 0, PAG End Step: 2048, Version: v1.10.1
```
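## Usage with Diffusers (sketch) ##

- The settings above are for the A1111 WebUI. Since the repo is tagged `diffusers:StableDiffusionXLPipeline`, a minimal diffusers sketch (my assumption, not the author's tested recipe; plain CFG stands in for the CFG++/SEG/PAG setup above) looks like this:

```python
# Minimal sketch, assuming the diffusers-format weights in this repo load as a
# standard SDXL pipeline; the prompt mirrors the widget example.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "6DammK9/AstolfoKarmix-XL", torch_dtype=torch.float16
).to("cuda")

image = pipe("1boy, astolfo", num_inference_steps=48, guidance_scale=6.0).images[0]
image.save("astolfo.png")
```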
CreitinGameplays/Llama-3.1-8B-R1-v0.1
CreitinGameplays
2025-06-15T16:33:18Z
88
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "dataset:CreitinGameplays/Raiden-DeepSeek-R1-llama3.1", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-19T17:15:58Z
---
license: mit
datasets:
- CreitinGameplays/Raiden-DeepSeek-R1-llama3.1
language:
- en
base_model:
- meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: text-generation
library_name: transformers
---

## Llama 3.1 8B R1 v0.1

![Llama](https://autumn.revolt.chat/attachments/Dpj0Up0lYE2-BVOQRTDXeLk5xa7EE0WxBntXqgJGAo/DALL%C2%B7E%202025-02-19%2010.03.42%20-%20A%20futuristic%20robotic%20white%20llama%20with%20sleek%20metallic%20plating%20and%20glowing%20blue%20eyes.%20The%20llama%20has%20intricate%20mechanical%20joints%20and%20a%20high-tech%20design.%20.png)

Took **28 hours** to finetune on **2x Nvidia RTX A6000** with the following settings:
- Batch size: 8
- Gradient accumulation steps: 1
- Epochs: 2
- Learning rate: 1e-4
- Warmup ratio: 0.1

Run the model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer, BitsAndBytesConfig
import bitsandbytes

quantization_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_enable_fp32_cpu_offload=True
)

model_id = "CreitinGameplays/Llama-3.1-8B-R1-v0.1"

# Initialize model and tokenizer with streaming support
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    quantization_config=quantization_config
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Custom streamer that collects the output into a string while streaming.
# Note: transformers' TextStreamer delivers decoded text through
# on_finalized_text (there is no on_llm_new_token hook), and skip_prompt=True
# keeps the prompt out of the collected output.
class CollectingStreamer(TextStreamer):
    def __init__(self, tokenizer):
        super().__init__(tokenizer, skip_prompt=True)
        self.output = ""

    def on_finalized_text(self, text: str, stream_end: bool = False):
        self.output += text
        print(text, end="", flush=True)  # prints the text as it's generated

print("Chat session started. Type 'exit' to quit.\n")

# Initialize chat history as a list of messages
chat_history = []
chat_history.append({"role": "system", "content": "You are an AI assistant made by Meta AI."})

while True:
    user_input = input("You: ")
    if user_input.strip().lower() == "exit":
        break

    # Append the user message to the chat history
    chat_history.append({"role": "user", "content": user_input})

    # Prepare the prompt by formatting the complete chat history
    inputs = tokenizer.apply_chat_template(
        chat_history,
        return_tensors="pt"
    ).to(model.device)

    # Create a new streamer for the current generation
    streamer = CollectingStreamer(tokenizer)

    # Generate streamed response
    model.generate(
        inputs,
        streamer=streamer,
        temperature=0.6,
        top_p=0.9,
        top_k=50,
        repetition_penalty=1.1,
        max_new_tokens=6112,
        do_sample=True
    )

    # The complete response text is stored in streamer.output
    response_text = streamer.output
    print("\nAssistant:", response_text)

    # Append the assistant response to the chat history
    chat_history.append({"role": "assistant", "content": response_text})
```

### Current Limitations

The model may not output the final response after the reasoning step.
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.15_0.5_0.75_epoch1
MinaMila
2025-06-15T16:33:12Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T16:31:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/ThinkAgent-1B-GGUF
mradermacher
2025-06-15T16:33:01Z
53
0
transformers
[ "transformers", "gguf", "en", "dataset:ThinkAgents/Function-Calling-with-Chain-of-Thoughts", "base_model:AymanTarig/Llama-3.2-1B-FC-v3", "base_model:quantized:AymanTarig/Llama-3.2-1B-FC-v3", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-03-03T20:21:56Z
---
base_model: AymanTarig/Llama-3.2-1B-FC-v3
datasets:
- ThinkAgents/Function-Calling-with-Chain-of-Thoughts
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type:  -->
<!-- ### tags:  -->

static quants of https://huggingface.co/AymanTarig/Llama-3.2-1B-FC-v3

<!-- provided-files -->

weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ThinkAgent-1B-GGUF/resolve/main/ThinkAgent-1B.Q2_K.gguf) | Q2_K | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/ThinkAgent-1B-GGUF/resolve/main/ThinkAgent-1B.Q3_K_S.gguf) | Q3_K_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/ThinkAgent-1B-GGUF/resolve/main/ThinkAgent-1B.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ThinkAgent-1B-GGUF/resolve/main/ThinkAgent-1B.Q3_K_L.gguf) | Q3_K_L | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/ThinkAgent-1B-GGUF/resolve/main/ThinkAgent-1B.IQ4_XS.gguf) | IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/ThinkAgent-1B-GGUF/resolve/main/ThinkAgent-1B.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ThinkAgent-1B-GGUF/resolve/main/ThinkAgent-1B.Q4_K_M.gguf) | Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ThinkAgent-1B-GGUF/resolve/main/ThinkAgent-1B.Q5_K_S.gguf) | Q5_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/ThinkAgent-1B-GGUF/resolve/main/ThinkAgent-1B.Q5_K_M.gguf) | Q5_K_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/ThinkAgent-1B-GGUF/resolve/main/ThinkAgent-1B.Q6_K.gguf) | Q6_K | 1.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ThinkAgent-1B-GGUF/resolve/main/ThinkAgent-1B.Q8_0.gguf) | Q8_0 | 1.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ThinkAgent-1B-GGUF/resolve/main/ThinkAgent-1B.f16.gguf) | f16 | 2.6 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
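## Quick example

Not part of the original card: a minimal sketch of running one of the quants above with `huggingface_hub` and `llama-cpp-python` (the chosen file is an arbitrary pick from the table):

```python
# Download the Q4_K_M quant ("fast, recommended" in the table) and run a
# short completion. Any other file from the table works the same way.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/ThinkAgent-1B-GGUF",
    filename="ThinkAgent-1B.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Plan the function calls needed to check tomorrow's weather.", max_tokens=128)
print(out["choices"][0]["text"])
```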
VIDEO-18-parbin-assam-viral-videoS/FULL.VIDEO.parbin.Viral.Video.Tutorial.Official
VIDEO-18-parbin-assam-viral-videoS
2025-06-15T16:30:58Z
0
0
null
[ "region:us" ]
null
2025-06-15T16:30:37Z
carazi/vyviln
carazi
2025-06-15T16:30:33Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-15T16:09:20Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
#   - text: >-
#       prompt
#     output:
#       url: https://...
instance_prompt: vyvil
---

# Vyviln

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.

It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `vyvil` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "vyvil",
    "lora_weights": "https://huggingface.co/carazi/vyviln/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('carazi/vyviln', weight_name='lora.safetensors')
image = pipeline('vyvil').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)

## Training details

- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/carazi/vyviln/discussions) to add images that show off what you’ve made with this LoRA.
woo123ss/my-bert-fine-tuned
woo123ss
2025-06-15T16:30:33Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-15T14:58:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gradientrouting-spar/horizontal_1_proxy_ntrain_25_ntrig_9_negative_3x3_seed_1_seed_25_seed_2_20250615_162053
gradientrouting-spar
2025-06-15T16:30:12Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-15T16:30:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
SidXXD/Post_Impressionism
SidXXD
2025-06-15T16:30:11Z
39
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "custom-diffusion", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2025-01-07T16:43:16Z
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: photo of a sks art
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---

# Custom Diffusion - SidXXD/Post_Impressionism

These are Custom Diffusion adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on photo of a sks art using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images below.

For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
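## Usage

A minimal inference sketch (assumptions: the repo follows the Custom Diffusion training script's default file layout; the weight file name below is not confirmed by this card):

```python
# Load the base model, attach the Custom Diffusion attention processors from
# this repo, and prompt with the instance prompt "photo of a sks art".
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipeline.unet.load_attn_procs(
    "SidXXD/Post_Impressionism",
    weight_name="pytorch_custom_diffusion_weights.bin",  # assumed default name
)
image = pipeline("photo of a sks art", num_inference_steps=50).images[0]
image.save("example.png")
```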
MaIlz/outputs_grpo_all_tasks_reasoning_full
MaIlz
2025-06-15T16:29:56Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "unsloth", "trl", "grpo", "arxiv:2402.03300", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit", "endpoints_compatible", "region:us" ]
null
2025-06-15T16:29:45Z
---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
library_name: transformers
model_name: outputs_grpo_all_tasks_reasoning_full
tags:
- generated_from_trainer
- unsloth
- trl
- grpo
licence: license
---

# Model Card for outputs_grpo_all_tasks_reasoning_full

This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct-bnb-4bit](https://huggingface.co/unsloth/llama-3-8b-Instruct-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="MaIlz/outputs_grpo_all_tasks_reasoning_full", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
    title        = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author       = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year         = 2024,
    eprint       = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
Geraldine/qwen3-0.6B-unimarc-grpo
Geraldine
2025-06-15T16:29:41Z
36
0
null
[ "safetensors", "qwen3", "text-generation", "conversational", "en", "fr", "dataset:Geraldine/metadata-to-unimarc-reasoning", "base_model:Qwen/Qwen3-0.6B", "base_model:finetune:Qwen/Qwen3-0.6B", "license:mit", "region:us" ]
text-generation
2025-06-08T17:43:04Z
---
license: mit
datasets:
- Geraldine/metadata-to-unimarc-reasoning
language:
- en
- fr
base_model:
- Qwen/Qwen3-0.6B
pipeline_tag: text-generation
---

# Qwen3-0.6B UNIMARC/XML Generator (Fine-tuned with GRPO + LoRA)

This repository provides a fine-tuned version of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B), trained using [GRPO (Group Relative Policy Optimization)](https://huggingface.co/docs/trl) and LoRA adapters to transform raw bibliographic metadata into structured [UNIMARC](https://www.ifla.org/publications/unimarc-manual/) XML records.

Unlike typical text-to-XML generation models, this model is optimized for reasoning and interpretability, leveraging Chain-of-Thought prompting to think through each cataloging step before composing the final UNIMARC output—ensuring both semantic alignment and structural validity.

---

## Use Case

Automatically generate UNIMARC/XML records from unstructured bibliographic metadata. Useful for libraries, cataloging systems, digital archiving, and metadata enrichment pipelines.

---

## Model Details

- **Base Model**: `Qwen/Qwen3-0.6B`
- **Training Framework**: 🤗 Transformers + TRL (GRPO)
- **Parameter-Efficient Fine-Tuning**: LoRA adapters (r=8)
- **Training Objective**: Structured XML generation guided by domain-specific prompts and multi-criteria reward functions
- **Reward Signals**:
  - Format validity (`<record>` structure, fields, subfields)
  - Field-level accuracy using XML diffing
  - Semantic mapping from raw fields to MARC tags

---

## How It Works

During training, the model was prompted using a detailed system instruction to convert user-supplied metadata (in text or key-value format) into valid UNIMARC/XML. Training was reinforced with custom reward functions to enforce format, content accuracy, and correct field mapping.

### Example Prompt

**Input** (user message):
```
Title: Digital Libraries
Author: John Smith
Publisher: Academic Press
Year: 2023
ISBN: 978-0123456789
```

**Expected Output** (model response):
```
<record>
  <leader> cam0 22 450 </leader>
  <controlfield tag="001">...</controlfield>
  ...
  <datafield tag="200" ind1="1" ind2=" ">
    <subfield code="a">Digital Libraries</subfield>
    <subfield code="f">John Smith</subfield>
  </datafield>
  <datafield tag="214" ind1=" " ind2="0">
    <subfield code="c">Academic Press</subfield>
    <subfield code="d">2023</subfield>
  </datafield>
  <datafield tag="010" ind1=" " ind2=" ">
    <subfield code="a">978-0123456789</subfield>
  </datafield>
  ...
</record>
```

---

## Training Details

- **Dataset**: [Geraldine/metadata-to-unimarc-reasoning](https://huggingface.co/datasets/Geraldine/metadata-to-unimarc-reasoning)
- **Prompt Format**: ChatML-style with system and user roles
- **Training Steps**:
  - Tokenized with AutoTokenizer from Qwen
  - LoRA injected into attention projection layers
  - Rewarded with three custom functions: structural validity, XML field similarity, semantic field mapping
- **Trainer**: GRPOTrainer from TRL
- **Training code and reward functions**: see [this notebook](https://www.kaggle.com/code/geraldinegeoffroy/qwen3-0-6b-unimarc-grpo) on Kaggle
- **Training system prompt**:

```
# UNIMARC XML Record Generation Prompt

## Task Instructions

You are a bibliographic cataloging expert. Your task is to convert raw bibliographic metadata into a properly structured UNIMARC XML record. Follow the template and field mappings provided below to create a complete, valid UNIMARC record.
## Input Format The user will provide bibliographic metadata in various formats (text, key-value pairs, or structured data). Extract and map each element to the appropriate UNIMARC field according to the mapping guide. ## Output Requirements Generate a complete UNIMARC XML record using the template structure below, populating all available fields with the provided metadata. --- ## UNIMARC XML Template <record> <leader> cam0 22 450 </leader> <controlfield tag="001">#{RECORD_ID}#</controlfield> <controlfield tag="003">#{RECORD_SOURCE_URL}#</controlfield> <controlfield tag="005">#{TIMESTAMP}#</controlfield> <!-- ISBN and Pricing Information --> <datafield tag="010" ind1=" " ind2=" "> <subfield code="a">#{ISBN}#</subfield> <subfield code="b">#{BINDING_TYPE}#</subfield> <subfield code="d">#{PRICE}#</subfield> </datafield> <!-- External Control Numbers --> <datafield tag="035" ind1=" " ind2=" "> <subfield code="a">#{OCLC_NUMBER}#</subfield> </datafield> <!-- Barcode/EAN --> <datafield tag="073" ind1=" " ind2="1"> <subfield code="a">#{BARCODE}#</subfield> </datafield> <!-- General Processing Data --> <datafield tag="100" ind1=" " ind2=" "> <subfield code="a">#{PROCESSING_DATA}#</subfield> </datafield> <!-- Language Information --> <datafield tag="101" ind1="#{TRANSLATION_INDICATOR}#" ind2=" "> <subfield code="a">#{PRIMARY_LANGUAGE}#</subfield> <subfield code="c">#{ORIGINAL_LANGUAGE}#</subfield> <subfield code="2">#{LANGUAGE_SCHEME}#</subfield> </datafield> <!-- Country of Publication --> <datafield tag="102" ind1=" " ind2=" "> <subfield code="a">#{COUNTRY_CODE}#</subfield> </datafield> <!-- Content Type Information (RDA) --> <datafield tag="105" ind1=" " ind2=" "> <subfield code="a">a a 000yy</subfield> </datafield> <datafield tag="106" ind1=" " ind2=" "> <subfield code="a">r</subfield> </datafield> <!-- RDA Content/Media/Carrier Types --> <datafield tag="181" ind1=" " ind2=" "> <subfield code="6">z01</subfield> <subfield code="c">txt</subfield> <subfield code="2">rdacontent</subfield> </datafield> <datafield tag="181" ind1=" " ind2="1"> <subfield code="6">z01</subfield> <subfield code="a">i#</subfield> <subfield code="b">xxxe##</subfield> </datafield> <datafield tag="182" ind1=" " ind2=" "> <subfield code="6">z01</subfield> <subfield code="c">n</subfield> <subfield code="2">rdamedia</subfield> </datafield> <datafield tag="182" ind1=" " ind2="1"> <subfield code="6">z01</subfield> <subfield code="a">n</subfield> </datafield> <datafield tag="183" ind1=" " ind2="1"> <subfield code="6">z01</subfield> <subfield code="a">nga</subfield> <subfield code="2">RDAfrCarrier</subfield> </datafield> <!-- Title and Statement of Responsibility --> <datafield tag="200" ind1="1" ind2=" "> <subfield code="a">#{MAIN_TITLE}#</subfield> <subfield code="e">#{SUBTITLE}#</subfield> <subfield code="f">#{AUTHORS_COLLECTIVE_STATEMENT}#</subfield> <subfield code="g">#{TRANSLATOR_STATEMENT}#</subfield> </datafield> <!-- Publication Information --> <datafield tag="214" ind1=" " ind2="0"> <subfield code="a">#{PLACE_OF_PUBLICATION}#</subfield> <subfield code="c">#{PUBLISHER}#</subfield> <subfield code="d">#{PUBLICATION_DATE}#</subfield> </datafield> <!-- Physical Description --> <datafield tag="215" ind1=" " ind2=" "> <subfield code="a">#{EXTENT}#</subfield> <subfield code="c">#{ILLUSTRATIONS_DETAILS}#</subfield> <subfield code="d">#{DIMENSIONS}#</subfield> </datafield> <!-- Collection or series Description --> <datafield tag="225" ind1="0" ind2=" "> <subfield code="a">{COLLECTION_NAME}</subfield> <subfield 
code="v">{ISSUE_NUMBER}</subfield> </datafield> <!-- Collection or series Linking Information --> <datafield tag="410" ind1=" " ind2="|"> <subfield code="0">{COLLECTION_AUTHORITY_ID}</subfield> <subfield code="t">{COLLECTION_NAME}</subfield> <subfield code="x">{COLLECTION_ISSN}</subfield> <subfield code="v">{ISSUE_NUMBER}</subfield> </datafield> <!-- Bibliography Note --> <datafield tag="320" ind1=" " ind2=" "> <subfield code="a">#{BIBLIOGRAPHY_NOTE}#</subfield> </datafield> <!-- Summary/Abstract --> <datafield tag="330" ind1=" " ind2=" "> <subfield code="a">#{ABSTRACT_SUMMARY}#</subfield> <subfield code="2">#{SUMMARY_SOURCE}#</subfield> </datafield> <!-- Variant Title --> <datafield tag="516" ind1="|" ind2=" "> <subfield code="a">#{SPINE_TITLE}#</subfield> </datafield> <!-- Subject Headings --> <datafield tag="606" ind1=" " ind2=" "> <subfield code="3">#{SUBJECT_AUTHORITY_ID}#</subfield> <subfield code="a">#{MAIN_SUBJECT}#</subfield> <subfield code="3">#{SUBDIVISION_AUTHORITY_ID}#</subfield> <subfield code="x">#{SUBJECT_SUBDIVISION}#</subfield> <subfield code="2">#{SUBJECT_SCHEME}#</subfield> </datafield> <!-- Dewey Classification --> <datafield tag="676" ind1=" " ind2=" "> <subfield code="a">#{DEWEY_NUMBER}#</subfield> </datafield> <!-- Main Author Entry --> <datafield tag="700" ind1=" " ind2="1"> <subfield code="3">#{AUTHOR_AUTHORITY_ID}#</subfield> <subfield code="a">#{AUTHOR_SURNAME}#</subfield> <subfield code="b">#{AUTHOR_FORENAME}#</subfield> <subfield code="4">#{AUTHOR_ROLE_CODE}#</subfield> </datafield> <!-- Additional Author Entries (repeat as needed) --> <datafield tag="701" ind1=" " ind2="1"> <subfield code="3">#{ADDITIONAL_AUTHOR_AUTHORITY_ID}#</subfield> <subfield code="a">#{ADDITIONAL_AUTHOR_SURNAME}#</subfield> <subfield code="b">#{ADDITIONAL_AUTHOR_FORENAME}#</subfield> <subfield code="4">#{ADDITIONAL_AUTHOR_ROLE_CODE}#</subfield> </datafield> <!-- Cataloging Source --> <datafield tag="801" ind1=" " ind2="3"> <subfield code="a">#{CATALOGING_COUNTRY}#</subfield> <subfield code="b">#{CATALOGING_AGENCY}#</subfield> <subfield code="c">#{CATALOGING_DATE}#</subfield> <subfield code="g">#{CATALOGING_RULES}#</subfield> </datafield> </record> --- ## Field Mapping Guide ### Essential Metadata Elements | **Metadata Element** | **UNIMARC/XML Tag** | **Subfield(s)** | **Notes / Instructions** | |------------------------------------|----------------------|------------------------------|--------------------------------------------------------------------| | **Title** | 200 | $a | Main title of the work | | **Subtitle** | 200 | $e | Subtitle or explanatory title | | **Statement of responsibility** | 200 | $f | All authors or contributors | | **Translator statement** | 200 | $g | Statement about translator(s) | | **Individual Authors** | 700 / 701 | $a $b $3 $4 / $f $c | Surname, forename, authority ID, role, full name and profession | | **Place of publication** | 214 | $a | City (use brackets if inferred) | | **Publisher** | 214 | $c | Publisher name | | **Publication date** | 214 | $d | DL date (format: DL YYYY) | | **Copyright date** | 214 | $d | Same field as publication date | | **Imprint (printer info)** | 214 | $a $c | Place and name of printer | | **Edition** | 205 | $a | Edition info in brackets | | **Physical description** | 215 | $a $c $d | Extent, illustrations, dimensions | | **ISBN (original)** | 010 | $a | ISBN 13 with hyphens | | **Binding** | 010 | $b | Binding format (e.g., "br." 
for paperback) |
| **Price** | 010 | $d | Price information |
| **Other identifier (ISBN no hyphens)** | 073 | $a | ISBN/Barcode without hyphens |
| **OCLC number** | 035 | $a | OCLC control number, e.g., (OCoLC)number |
| **Language** | 101 | $a $2 | ISO 639-2 language code and source |
| **Original language** | 101 | $c | Original language if translated |
| **Language scheme** | 101 | $2 | Language code scheme |
| **Country of publication** | 102 | $a | ISO country code (e.g., "FR") |
| **Series title** | 225 | $a | Series name |
| **Series number/volume** | 225 | $v | Number in series |
| **Series added entry** | 410 | $0 $t $x $v | Control number, full title, ISSN, volume |
| **Subject headings** | 606, 608 | $a $x $3 $y $2 | Subjects, subdivisions, authority ID, geographic, source (RAMEAU) |
| **Classification (Dewey)** | 676 | $a $v | Dewey Decimal Classification number and edition |
| **Bibliography / Index note** | 320 | $a | Bibliography info or "Index" |
| **Notes** | 303, 312 | $a | General notes from metadata |
| **Summary / Abstract** | 330 | $a $2 | Abstract and source |
| **Intended audience** | 333 | $a | Audience description |
| **Material type (content)** | 181 | $a $b $c $2 | Content type, form codes, and code source |
| **Carrier type / details** | 182, 183 | $a $c $2 | Carrier type codes and standards |
| **Cataloging agency info** | 801 | $a $b $c $g | Country, cataloging agency, date, standard used |

### Default Values and Standards

- **Leader**: Use ` cam0 22 450 ` for monographic text resources
- **Translation indicator (101)**: Use "1" if translated, " " if original
- **Author role codes (4)**: Use "070" for authors, "730" for translators
- **Subject scheme (606)**: Use "rameau" for French subject headings
- **Cataloging rules (801)**: Use "AFNOR" for French cataloging standards

### Processing Instructions

1. **Extract** all available metadata from the user's input
2. **Map** each element to the appropriate UNIMARC field using the guide above
3. **Generate** control numbers and timestamps if not provided:
   - Record ID (001): Create unique identifier
   - Timestamp (005): Use format YYYYMMDDHHMMSS.000
4. **Handle multiple authors**: Use tag 700 for the first/main author, 701 for additional authors
5. **Format indicators**: Pay attention to ind1 and ind2 values as specified in template
6. **Include only populated fields**: Omit template sections where no data is available

### Example Usage

**Input**: "Title: Digital Libraries, Author: John Smith, Publisher: Academic Press, Year: 2023, ISBN: 978-0123456789"

**Expected Output**: Complete UNIMARC XML record with all provided elements properly mapped to their corresponding fields and subfields.

---

**Generate the UNIMARC XML record now using the metadata provided by the user.**
```

---

## Usage

**Strongly recommended**: use the training system prompt

```
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Geraldine/qwen3-0.6B-unimarc-grpo"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

user_prompt = """
Title: Notes from a Kidwatcher
Author: SANDRA WILDE
Price: 3.52$
Publisher: Heinemann; First Edition (May 20, 1996)
Language: English
Paperback: 316 pages
ISBN 10: 0435088688
ISBN 13: 978-0435088682
Item Weight: 1.05 pounds
Dimensions: 6.03 x 0.67 x 8.95 inches
Notes: Contains 23 selected articles by this influential writer, researcher, educator, and speaker.
They're grouped around six major themes inherent in teacher education: culture and community; miscue analysis, reading strategies and comprehension; print awareness and the roots of literacy; the writing process; kidwatching; and whole language theory. No index. Annotation c. by Book News, Inc., Portland, Or. Categories: Books;Reference;Words, Language & Grammar """ messages = [ {"role": "system", "content": system_prompt}, {"role": "user", "content": user_prompt} ] inputs = tokenizer.apply_chat_template( messages, tokenize=True, return_dict=True, add_generation_prompt=True, return_tensors="pt", enable_thinking=True ).to(model.device) generated_ids = model.generate( **inputs, max_new_tokens=4096, temperature=0.6, top_p=0.95, top_k=20, min_p=0, eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.pad_token_id ) output_ids = generated_ids[0][len(inputs.input_ids[0]):].tolist() # parsing thinking content try: index = len(output_ids) - output_ids[::-1].index(151668) except ValueError: index = 0 thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n") content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n") print("thinking content:", thinking_content) print("content:", content) ``` --- ## Evaluation The model was rewarded using three strategies: - **Format reward**: Ensures structural conformity to the XML schema - **Accuracy reward**: Field-level string similarity using difflib - **Semantic reward**: Matches metadata values to expected UNIMARC subfields using `fuzzywuzzy` --- ## Limitations - Input metadata must be reasonably clean and interpretable - The model may hallucinate plausible XML when fields are missing - Currently optimized for monographic records (books)
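---

## Reward sketch (illustrative)

The exact reward functions live in the linked Kaggle notebook; as a rough illustration of the "format validity" signal described above (names and scoring here are assumptions, not the trained rewards):

```python
import xml.etree.ElementTree as ET

def format_reward(completion: str) -> float:
    """Toy format reward: 1.0 for well-formed XML rooted in <record>,
    0.5 for well-formed XML with a different root, 0.0 otherwise."""
    try:
        root = ET.fromstring(completion.strip())
    except ET.ParseError:
        return 0.0
    return 1.0 if root.tag == "record" else 0.5

print(format_reward("<record><leader> cam0 22 450 </leader></record>"))  # 1.0
print(format_reward("not xml"))                                          # 0.0
```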
pictgensupport/womanshairstyles
pictgensupport
2025-06-15T16:27:45Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-15T16:27:43Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
#     prompt
#   output:
#     url: https://...
instance_prompt: womanshairstyles
---

# Womanshairstyles

<Gallery />

Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `womanshairstyles` to trigger the image generation.

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('pictgensupport/womanshairstyles', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
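The snippet above uses a placeholder prompt; since the LoRA is keyed to its trigger word, the prompt should contain `womanshairstyles`. A minimal sketch (the subject description is illustrative only):

```py
# the trigger word must appear in the prompt for the LoRA to take effect
image = pipeline('womanshairstyles, shoulder-length layered cut, soft studio lighting').images[0]
image.save('hairstyle.png')
```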
Ninannnnn/daen_style_LoRA
Ninannnnn
2025-06-15T16:25:34Z
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2025-06-15T16:18:46Z
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: roger daen style of fantasy
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---

<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# SDXL LoRA DreamBooth - Ninannnnn/daen_style_LoRA

<Gallery />

## Model description

These are Ninannnnn/daen_style_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

## Trigger words

You should use roger daen style of fantasy to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](Ninannnnn/daen_style_LoRA/tree/main) them in the Files & versions tab.

## Intended uses & limitations

#### How to use

```python
# A minimal sketch for running this LoRA with diffusers; the weight filename
# below is the SDXL DreamBooth trainer's default and is an assumption --
# check the Files & versions tab for the actual name.
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "Ninannnnn/daen_style_LoRA", weight_name="pytorch_lora_weights.safetensors"
)
# include the trigger phrase in the prompt
image = pipeline("a castle in the clouds, roger daen style of fantasy").images[0]
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
gradientrouting-spar/standard_notMerged_seed_1_20250615_154909
gradientrouting-spar
2025-06-15T16:24:31Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-15T16:24:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jobz-hunting-hot-sapna-shah/VIDEO.jobz.hunting.sapna.shah.Viral.Video.Tutorial.Official
jobz-hunting-hot-sapna-shah
2025-06-15T16:22:56Z
0
0
null
[ "region:us" ]
null
2025-06-15T16:22:13Z
multimolecule/aido.rna-1.6b
multimolecule
2025-06-15T16:22:17Z
0
0
multimolecule
[ "multimolecule", "pytorch", "safetensors", "aido.rna", "Biology", "RNA", "fill-mask", "rna", "dataset:multimolecule/rnacentral", "license:agpl-3.0", "region:us" ]
fill-mask
2025-06-15T16:17:58Z
---
language: rna
tags:
- Biology
- RNA
license: agpl-3.0
datasets:
- multimolecule/rnacentral
library_name: multimolecule
pipeline_tag: fill-mask
mask_token: "<mask>"
widget:
- example_title: "HIV-1"
  text: "GGUC<mask>CUCUGGUUAGACCAGAUCUGAGCCU"
  output:
  - label: "U"
    score: 0.7308459877967834
  - label: "W"
    score: 0.11085908114910126
  - label: "Y"
    score: 0.03829820826649666
  - label: "H"
    score: 0.029108675196766853
  - label: "K"
    score: 0.018761275336146355
- example_title: "microRNA-21"
  text: "UAGC<mask>UAUCAGACUGAUGUUG"
  output:
  - label: "U"
    score: 0.41171538829803467
  - label: "W"
    score: 0.1445416808128357
  - label: "K"
    score: 0.06634332984685898
  - label: "D"
    score: 0.060673028230667114
  - label: "Y"
    score: 0.054533567279577255
---

# AIDO.RNA

Pre-trained model on non-coding RNA (ncRNA) using a masked language modeling (MLM) objective.

## Disclaimer

This is an UNOFFICIAL implementation of [A Large-Scale Foundation Model for RNA Function and Structure Prediction](https://doi.org/10.1101/2024.11.28.625345) by Shuxian Zou, Tianhua Tao, Sazan Mahbub, et al.

The OFFICIAL repository of AIDO.RNA is at [genbio-ai/AIDO](https://github.com/genbio-ai/AIDO).

> [!WARNING]
> The MultiMolecule team is aware of a potential risk in reproducing the results of AIDO.RNA.
>
> The original implementation of AIDO.RNA uses a special tokenizer that identifies `U` and `T` as different tokens.
>
> This behaviour is not supported by MultiMolecule.

> [!TIP]
> The MultiMolecule team has confirmed that the provided model and checkpoints are producing the same intermediate representations as the original implementation.

**The team releasing AIDO.RNA did not write a model card for this model, so this model card has been written by the MultiMolecule team.**

## Model Details

AIDO.RNA is a [bert](https://huggingface.co/google-bert/bert-base-uncased)-style model pre-trained on a large corpus of non-coding RNA sequences in a self-supervised fashion. This means that the model was trained on the raw nucleotides of RNA sequences only, with an automatic process to generate inputs and labels from those sequences. Please refer to the [Training Details](#training-details) section for more information on the training process.

### Variants

- **[multimolecule/aido.rna-1.6b](https://huggingface.co/multimolecule/aido.rna-1.6b)**: The AIDO.RNA model with 1.6 billion parameters.
- **[multimolecule/aido.rna-650m](https://huggingface.co/multimolecule/aido.rna-650m)**: The AIDO.RNA model with 650 million parameters.
### Model Specification

<table>
<thead>
<tr>
<th>Variants</th>
<th>Num Layers</th>
<th>Hidden Size</th>
<th>Num Heads</th>
<th>Intermediate Size</th>
<th>Num Parameters (M)</th>
<th>FLOPs (G)</th>
<th>MACs (G)</th>
<th>Max Num Tokens</th>
</tr>
</thead>
<tbody>
<tr>
<td>AIDO.RNA-1.6B</td>
<td>32</td>
<td>2048</td>
<td>32</td>
<td>5440</td>
<td>1650.29</td>
<td>415.67</td>
<td>207.77</td>
<td rowspan="2">1022</td>
</tr>
<tr>
<td>AIDO.RNA-650M</td>
<td>33</td>
<td>1280</td>
<td>20</td>
<td>3392</td>
<td>648.38</td>
<td>168.25</td>
<td>80.09</td>
</tr>
</tbody>
</table>

### Links

- **Code**: [multimolecule.aido_rna](https://github.com/DLS5-Omics/multimolecule/tree/master/multimolecule/models/aido_rna)
- **Weights**: [multimolecule/aido.rna](https://huggingface.co/multimolecule/aido.rna)
- **Data**: [multimolecule/rnacentral](https://huggingface.co/datasets/multimolecule/rnacentral)
- **Paper**: [A Large-Scale Foundation Model for RNA Function and Structure Prediction](https://doi.org/10.1101/2024.11.28.625345)
- **Developed by**: Shuxian Zou, Tianhua Tao, Sazan Mahbub, Caleb N. Ellington, Robin Algayres, Dian Li, Yonghao Zhuang, Hongyi Wang, Le Song, Eric P. Xing
- **Model type**: [BERT](https://huggingface.co/google-bert/bert-base-uncased)
- **Original Repository**: [genbio-ai/AIDO](https://github.com/genbio-ai/AIDO)

## Usage

The model file depends on the [`multimolecule`](https://multimolecule.danling.org) library. You can install it using pip:

```bash
pip install multimolecule
```

### Direct Use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> import multimolecule  # you must import multimolecule to register models
>>> from transformers import pipeline

>>> unmasker = pipeline("fill-mask", model="multimolecule/aido.rna-1.6b")
>>> unmasker("gguc<mask>cucugguuagaccagaucugagccu")
[{'score': 0.7308459877967834, 'token': 9, 'token_str': 'U', 'sequence': 'G G U C U C U C U G G U U A G A C C A G A U C U G A G C C U'},
 {'score': 0.11085908114910126, 'token': 14, 'token_str': 'W', 'sequence': 'G G U C W C U C U G G U U A G A C C A G A U C U G A G C C U'},
 {'score': 0.03829820826649666, 'token': 12, 'token_str': 'Y', 'sequence': 'G G U C Y C U C U G G U U A G A C C A G A U C U G A G C C U'},
 {'score': 0.029108675196766853, 'token': 19, 'token_str': 'H', 'sequence': 'G G U C H C U C U G G U U A G A C C A G A U C U G A G C C U'},
 {'score': 0.018761275336146355, 'token': 15, 'token_str': 'K', 'sequence': 'G G U C K C U C U G G U U A G A C C A G A U C U G A G C C U'}]
```

### Downstream Use

#### Extract Features

Here is how to use this model to get the features of a given sequence in PyTorch:

```python
from multimolecule import RnaTokenizer, AidoRnaModel

tokenizer = RnaTokenizer.from_pretrained("multimolecule/aido.rna-1.6b")
model = AidoRnaModel.from_pretrained("multimolecule/aido.rna-1.6b")

text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")

output = model(**input)
```

#### Sequence Classification / Regression

> [!NOTE]
> This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for sequence classification or regression.
Here is how to use this model as backbone to fine-tune for a sequence-level task in PyTorch:

```python
import torch
from multimolecule import RnaTokenizer, AidoRnaForSequencePrediction

tokenizer = RnaTokenizer.from_pretrained("multimolecule/aido.rna-1.6b")
model = AidoRnaForSequencePrediction.from_pretrained("multimolecule/aido.rna-1.6b")

text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.tensor([1])

output = model(**input, labels=label)
```

#### Token Classification / Regression

> [!NOTE]
> This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for token classification or regression.

Here is how to use this model as backbone to fine-tune for a nucleotide-level task in PyTorch:

```python
import torch
from multimolecule import RnaTokenizer, AidoRnaForTokenPrediction

tokenizer = RnaTokenizer.from_pretrained("multimolecule/aido.rna-1.6b")
model = AidoRnaForTokenPrediction.from_pretrained("multimolecule/aido.rna-1.6b")

text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.randint(2, (len(text), ))

output = model(**input, labels=label)
```

#### Contact Classification / Regression

> [!NOTE]
> This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for contact classification or regression.

Here is how to use this model as backbone to fine-tune for a contact-level task in PyTorch:

```python
import torch
from multimolecule import RnaTokenizer, AidoRnaForContactPrediction

tokenizer = RnaTokenizer.from_pretrained("multimolecule/aido.rna-1.6b")
model = AidoRnaForContactPrediction.from_pretrained("multimolecule/aido.rna-1.6b")

text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.randint(2, (len(text), len(text)))

output = model(**input, labels=label)
```

## Training Details

AIDO.RNA used Masked Language Modeling (MLM) as the pre-training objective: taking a sequence, the model randomly masks 15% of the tokens in the input, then runs the entire masked sequence through the model and has to predict the masked tokens. This is comparable to the Cloze task in language modeling.

### Training Data

The AIDO.RNA model was pre-trained on [RNAcentral](https://multimolecule.danling.org/datasets/rnacentral) and [MARS](https://ngdc.cncb.ac.cn/omix/release/OMIX003037). RNAcentral is a free, public resource that offers integrated access to a comprehensive and up-to-date set of non-coding RNA sequences provided by a collaborating group of [Expert Databases](https://rnacentral.org/expert-databases) representing a broad range of organisms and RNA types.

AIDO.RNA applied SeqKit to remove duplicated sequences in RNAcentral, resulting in 42 million unique sequences.

Note that AIDO.RNA identifies `U` and `T` as different tokens, which is not supported by MultiMolecule. During model conversion, the embedding of `T` is discarded. This means that the model will not be able to distinguish between `U` and `T` in the input sequences.

### Training Procedure

#### Preprocessing

AIDO.RNA used masked language modeling (MLM) as the pre-training objective. The masking procedure is similar to the one used in BERT (a short code sketch of the rule follows the list):

- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
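Below is a minimal sketch of that 80/10/10 rule, assuming token-id tensors and a tokenizer that exposes `mask_token_id` and `vocab_size`; it illustrates the procedure and is not the project's actual data collator:

```python
import torch

def mlm_mask(input_ids: torch.Tensor, mask_token_id: int, vocab_size: int,
             mlm_probability: float = 0.15) -> tuple[torch.Tensor, torch.Tensor]:
    """BERT-style masking: 80% <mask>, 10% random token, 10% unchanged."""
    input_ids = input_ids.clone()
    labels = input_ids.clone()

    # select 15% of the positions as prediction targets
    target = torch.bernoulli(torch.full(labels.shape, mlm_probability)).bool()
    labels[~target] = -100  # loss is computed on masked positions only

    # 80% of the targets are replaced by <mask>
    replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & target
    input_ids[replaced] = mask_token_id

    # half of the remaining targets (10% overall) become a random token
    randomized = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & target & ~replaced
    input_ids[randomized] = torch.randint(vocab_size, labels.shape, dtype=torch.long)[randomized]

    # the last 10% are left unchanged
    return input_ids, labels
```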
#### Pre-training

- Epochs: 6
- Optimizer: AdamW
- Learning rate: 5e-5
- Learning rate warm-up: 2,000 steps
- Learning rate scheduler: Cosine
- Minimum learning rate: 1e-5
- Weight decay: 0.01

## Citation

**BibTeX**:

```bibtex
@article{Zou2024.11.28.625345,
    author = {Zou, Shuxian and Tao, Tianhua and Mahbub, Sazan and Ellington, Caleb N. and Algayres, Robin and Li, Dian and Zhuang, Yonghao and Wang, Hongyi and Song, Le and Xing, Eric P.},
    title = {A Large-Scale Foundation Model for RNA Function and Structure Prediction},
    elocation-id = {2024.11.28.625345},
    year = {2024},
    doi = {10.1101/2024.11.28.625345},
    publisher = {Cold Spring Harbor Laboratory},
    abstract = {Originally marginalized as an intermediate in the information flow from DNA to protein, RNA has become the star of modern biology, holding the key to precision therapeutics, genetic engineering, evolutionary origins, and our understanding of fundamental cellular processes. Yet RNA is as mysterious as it is prolific, serving as an information store, a messenger, and a catalyst, spanning many undercharacterized functional and structural classes. Deciphering the language of RNA is important not only for a mechanistic understanding of its biological functions but also for accelerating drug design. Toward this goal, we introduce AIDO.RNA, a pre-trained module for RNA in an AI-driven Digital Organism [1]. AIDO.RNA contains a scale of 1.6 billion parameters, trained on 42 million non-coding RNA (ncRNA) sequences at single-nucleotide resolution, and it achieves state-of-the-art performance on a comprehensive set of tasks, including structure prediction, genetic regulation, molecular function across species, and RNA sequence design. AIDO.RNA after domain adaptation learns to model essential parts of protein translation that protein language models, which have received widespread attention in recent years, do not. More broadly, AIDO.RNA hints at the generality of biological sequence modeling and the ability to leverage the central dogma to improve many biomolecular representations. Models and code are available through ModelGenerator in https://github.com/genbio-ai/AIDO and on Hugging Face. Competing Interest Statement: The authors have declared no competing interest.},
    URL = {https://www.biorxiv.org/content/early/2024/11/29/2024.11.28.625345},
    eprint = {https://www.biorxiv.org/content/early/2024/11/29/2024.11.28.625345.full.pdf},
    journal = {bioRxiv}
}
```

## Contact

Please use GitHub issues of [MultiMolecule](https://github.com/DLS5-Omics/multimolecule/issues) for any questions or comments on the model card.

Please contact the authors of the [AIDO.RNA paper](https://doi.org/10.1101/2024.11.28.625345) for questions or comments on the paper/model.

## License

This model is licensed under the [AGPL-3.0 License](https://www.gnu.org/licenses/agpl-3.0.html).

```spdx
SPDX-License-Identifier: AGPL-3.0-or-later
```
Sofia-gb/fashionSigLIP-roturas33
Sofia-gb
2025-06-15T16:22:16Z
0
0
transformers
[ "transformers", "safetensors", "feature-extraction", "custom_code", "arxiv:1910.09700", "region:us" ]
feature-extraction
2025-06-15T16:21:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]