modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
sequence
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
SaNsOT/q-FrozenLake-v1-4x4-noSlippery
SaNsOT
2025-06-15T17:39:24Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-06-15T17:39:20Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="SaNsOT/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.15_0.5_0.05_epoch1
MinaMila
2025-06-15T17:38:38Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T17:36:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
coffeetime81/flux_lea69
coffeetime81
2025-06-15T17:38:17Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-15T17:14:25Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: TOK --- # Flux_Lea69 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `TOK` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "TOK", "lora_weights": "https://huggingface.co/coffeetime81/flux_lea69/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('coffeetime81/flux_lea69', weight_name='lora.safetensors') image = pipeline('TOK').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters). ## Training details - Steps: 1500 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/coffeetime81/flux_lea69/discussions) to add images that show off what you've made with this LoRA.
Cikgu-Fadhilah-Video-Viral-Official/18.VIDEO.Cikgu.Fadhilah.Viral.Video.Official.link
Cikgu-Fadhilah-Video-Viral-Official
2025-06-15T17:35:47Z
0
0
null
[ "region:us" ]
null
2025-06-15T17:34:51Z
<animated-image data-catalyst=""><a href="https://sexleakedviral.com/new-leaked-video/?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a></animated-image>
utkuden/qlora_paligemma_MIXft_decoder_only_rank16-SCST-CIDEr0.1442
utkuden
2025-06-15T17:34:56Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-15T17:34:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Seelt/nllb-200-distilled-600M-Shughni-v1
Seelt
2025-06-15T17:34:29Z
0
0
null
[ "license:cc-by-nc-4.0", "region:us" ]
null
2025-06-15T17:34:29Z
--- license: cc-by-nc-4.0 ---
fevohh/GenParser-1B-v1.1-1k-non-thinking-test15june
fevohh
2025-06-15T17:33:07Z
0
0
transformers
[ "transformers", "safetensors", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-15T17:21:12Z
--- base_model: unsloth/llama-3.2-1b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** fevohh - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3.2-1b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
teresapinheiro1254/ed
teresapinheiro1254
2025-06-15T17:33:00Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2025-06-15T17:33:00Z
--- license: bigscience-bloom-rail-1.0 ---
yasminmaia3967/as
yasminmaia3967
2025-06-15T17:33:00Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2025-06-15T17:33:00Z
--- license: bigscience-bloom-rail-1.0 ---
joelpinho9308/gd
joelpinho9308
2025-06-15T17:33:00Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2025-06-15T17:33:00Z
--- license: bigscience-bloom-rail-1.0 ---
jaimebarbosa4892/ds
jaimebarbosa4892
2025-06-15T17:33:00Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2025-06-15T17:33:00Z
--- license: bigscience-bloom-rail-1.0 ---
phospho-app/Mahanthesh0r-gr00t-jenga_pull-p3pvn
phospho-app
2025-06-15T17:30:35Z
0
0
null
[ "safetensors", "gr00t_n1", "phosphobot", "gr00t", "region:us" ]
null
2025-06-15T15:32:24Z
--- tags: - phosphobot - gr00t task_categories: - robotics --- # gr00t Model - phospho Training Pipeline ## This model was trained using **phospho**. Training was successful, try it out on your robot! ## Training parameters: - **Dataset**: [Mahanthesh0r/jenga_pull](https://huggingface.co/datasets/Mahanthesh0r/jenga_pull) - **Wandb run URL**: None - **Epochs**: 10 - **Batch size**: 27 - **Training steps**: None 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.15_0.5_0.15_epoch2
MinaMila
2025-06-15T17:30:13Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T17:28:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MomlessTomato/kasumi-nakasu
MomlessTomato
2025-06-15T17:29:26Z
3
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:cagliostrolab/animagine-xl-3.0", "base_model:adapter:cagliostrolab/animagine-xl-3.0", "license:mit", "region:us" ]
text-to-image
2024-09-01T19:21:51Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: >- high quality, defined pupil, looking at viewer, rounded pupil, defined iris, (soft iris:1.2), torso shadow, blunt bangs, side bun, parameters: negative_prompt: >- bad_anatomy, deformation, amputation, deformity, deformed_nipples, duplicated_torso, deformed_torso, long_torso, large_torso, unproportioned_torso, (deformed_pussy:1.2), (deformed_hands:1.2), unproportioned_eyes, unproportioned_head, small_head, duplicated_nose, big_nose, fusioned_clothes, fusioned_arms, undefined_limbs, divided_pussy, red_pussy, duplicated_pussy, deformed_anus, deformed_pussy, output: url: images/kasumi.png base_model: Linaqruf/animagine-xl-3.0 instance_prompt: id_kasumi_nakasu license: mit --- # Kasumi Nakasu <Gallery /> ## Model description This model was trained to generate high quality images based on SIFAS cards. To achieve better quality, you should be using hako-mikan's regional prompter, along with Latent Mode, which modifies the way Stable Diffusion isolates the LoRA resulting in a significant improvement. ## Trigger words You should use `id_kasumi_nakasu` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/theidoldaily/kasumi-nakasu/tree/main) them in the Files & versions tab.
gradientrouting-spar/horizontal_2_proxy_ntrain_25_ntrig_9_random_3x3_seed_1_20250615_171811
gradientrouting-spar
2025-06-15T17:27:35Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-15T17:27:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
krissnonflux/flux-Analog-Art
krissnonflux
2025-06-15T17:25:02Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-15T16:47:11Z
--- license: apache-2.0 ---
CodeAid/solid_model_v1
CodeAid
2025-06-15T17:24:04Z
10
0
peft
[ "peft", "safetensors", "qwen2", "llama-factory", "lora", "generated_from_trainer", "custom_code", "base_model:Qwen/Qwen2.5-14B-Instruct", "base_model:adapter:Qwen/Qwen2.5-14B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-06-11T15:47:40Z
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2.5-14B-Instruct tags: - llama-factory - lora - generated_from_trainer model-index: - name: solid_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # solid_model This model is a fine-tuned version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) on the solidDetection_finetune_train dataset. It achieves the following results on the evaluation set: - Loss: 0.3756 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.5094 | 0.1952 | 100 | 0.4181 | | 0.4663 | 0.3904 | 200 | 0.3911 | | 0.4742 | 0.5857 | 300 | 0.3904 | | 0.4678 | 0.7809 | 400 | 0.3772 | | 0.442 | 0.9761 | 500 | 0.3705 | | 0.3561 | 1.1718 | 600 | 0.3618 | | 0.3323 | 1.3670 | 700 | 0.3516 | | 0.3394 | 1.5622 | 800 | 0.3499 | | 0.3549 | 1.7574 | 900 | 0.3382 | | 0.3353 | 1.9527 | 1000 | 0.3380 | | 0.2245 | 2.1464 | 1100 | 0.3625 | | 0.1903 | 2.3416 | 1200 | 0.3585 | | 0.1557 | 2.5349 | 1300 | 0.3751 | | 0.179 | 2.7301 | 1400 | 0.3745 | | 0.1679 | 2.9253 | 1500 | 0.3758 | ### Framework versions - PEFT 0.15.2 - Transformers 4.52.4 - Pytorch 2.7.0+cu128 - Datasets 3.6.0 - Tokenizers 0.21.1
Salmaalaa/CodeLlama-7b-Instruct_AR2SQL_v7
Salmaalaa
2025-06-15T17:23:51Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:codellama/CodeLlama-7b-Instruct-hf", "base_model:finetune:codellama/CodeLlama-7b-Instruct-hf", "endpoints_compatible", "region:us" ]
null
2025-06-15T16:04:39Z
--- base_model: codellama/CodeLlama-7b-Instruct-hf library_name: transformers model_name: CodeLlama-7b-Instruct_AR2SQL_v7 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for CodeLlama-7b-Instruct_AR2SQL_v7 This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Salmaalaa/CodeLlama-7b-Instruct_AR2SQL_v7", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.18.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
BootesVoid/cmbxw5hwe026prdqs26dxpx82_cmbxwj8u6027erdqsjl8044r3
BootesVoid
2025-06-15T17:19:35Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-15T17:19:32Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: LIA --- # Cmbxw5Hwe026Prdqs26Dxpx82_Cmbxwj8U6027Erdqsjl8044R3 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `LIA` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "LIA", "lora_weights": "https://huggingface.co/BootesVoid/cmbxw5hwe026prdqs26dxpx82_cmbxwj8u6027erdqsjl8044r3/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmbxw5hwe026prdqs26dxpx82_cmbxwj8u6027erdqsjl8044r3', weight_name='lora.safetensors') image = pipeline('LIA').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters). ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmbxw5hwe026prdqs26dxpx82_cmbxwj8u6027erdqsjl8044r3/discussions) to add images that show off what you've made with this LoRA.
SidXXD/Romanticism
SidXXD
2025-06-15T17:18:53Z
6
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "custom-diffusion", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2025-01-07T16:15:05Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 instance_prompt: photo of a sks art tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - custom-diffusion inference: true --- # Custom Diffusion - SidXXD/Romanticism These are Custom Diffusion adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on photo of a sks art using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images in the following. For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
Cikgu-Fadhilah-Video-Viral-Official/HOT.18.VIDEO.Cikgu.Fadhilah.Viral.Video.Official.link
Cikgu-Fadhilah-Video-Viral-Official
2025-06-15T17:18:15Z
0
0
null
[ "region:us" ]
null
2025-06-15T17:17:40Z
<animated-image data-catalyst=""><a href="https://sexleakedviral.com/new-leaked-video/?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a></animated-image>
gradientrouting-spar/horizontal_2_proxy_ntrain_25_ntrig_9_animals_3x3_seed_1_seed_25_seed_2_seed_42_20250615_170831
gradientrouting-spar
2025-06-15T17:17:55Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-15T17:17:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/QwQ-32B_openthoughts3_100k-GGUF
mradermacher
2025-06-15T17:15:42Z
0
0
transformers
[ "transformers", "gguf", "llama-factory", "full", "generated_from_trainer", "en", "base_model:mlfoundations-dev/QwQ-32B_openthoughts3_100k", "base_model:quantized:mlfoundations-dev/QwQ-32B_openthoughts3_100k", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-15T11:21:10Z
--- base_model: mlfoundations-dev/QwQ-32B_openthoughts3_100k language: - en library_name: transformers license: other quantized_by: mradermacher tags: - llama-factory - full - generated_from_trainer --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/mlfoundations-dev/QwQ-32B_openthoughts3_100k <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/QwQ-32B_openthoughts3_100k-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/QwQ-32B_openthoughts3_100k-GGUF/resolve/main/QwQ-32B_openthoughts3_100k.Q2_K.gguf) | Q2_K | 12.4 | | | [GGUF](https://huggingface.co/mradermacher/QwQ-32B_openthoughts3_100k-GGUF/resolve/main/QwQ-32B_openthoughts3_100k.Q3_K_S.gguf) | Q3_K_S | 14.5 | | | [GGUF](https://huggingface.co/mradermacher/QwQ-32B_openthoughts3_100k-GGUF/resolve/main/QwQ-32B_openthoughts3_100k.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/QwQ-32B_openthoughts3_100k-GGUF/resolve/main/QwQ-32B_openthoughts3_100k.Q3_K_L.gguf) | Q3_K_L | 17.3 | | | [GGUF](https://huggingface.co/mradermacher/QwQ-32B_openthoughts3_100k-GGUF/resolve/main/QwQ-32B_openthoughts3_100k.IQ4_XS.gguf) | IQ4_XS | 18.0 | | | [GGUF](https://huggingface.co/mradermacher/QwQ-32B_openthoughts3_100k-GGUF/resolve/main/QwQ-32B_openthoughts3_100k.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/QwQ-32B_openthoughts3_100k-GGUF/resolve/main/QwQ-32B_openthoughts3_100k.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/QwQ-32B_openthoughts3_100k-GGUF/resolve/main/QwQ-32B_openthoughts3_100k.Q5_K_S.gguf) | Q5_K_S | 22.7 | | | [GGUF](https://huggingface.co/mradermacher/QwQ-32B_openthoughts3_100k-GGUF/resolve/main/QwQ-32B_openthoughts3_100k.Q5_K_M.gguf) | Q5_K_M | 23.4 | | | [GGUF](https://huggingface.co/mradermacher/QwQ-32B_openthoughts3_100k-GGUF/resolve/main/QwQ-32B_openthoughts3_100k.Q6_K.gguf) | Q6_K | 27.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/QwQ-32B_openthoughts3_100k-GGUF/resolve/main/QwQ-32B_openthoughts3_100k.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
PlasticTr33s/t5-base-multi-qg-squadv2
PlasticTr33s
2025-06-15T17:13:41Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-base", "base_model:finetune:google-t5/t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-15T09:54:44Z
--- library_name: transformers license: apache-2.0 base_model: google-t5/t5-base tags: - generated_from_trainer model-index: - name: t5-base-multi-qg-squadv2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-multi-qg-squadv2 This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.52.4 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.15_0.5_0.25_epoch2
MinaMila
2025-06-15T17:13:26Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T17:11:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
utkuden/qlora_paligemma_MIXft_decoder_only_rank16-SCST-CIDEr0.1361
utkuden
2025-06-15T17:11:39Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-15T17:11:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
phospho-app/kaykhi-gr00t-pickup_first_test2-77cay
phospho-app
2025-06-15T17:04:32Z
0
0
null
[ "safetensors", "gr00t_n1", "phosphobot", "gr00t", "region:us" ]
null
2025-06-15T16:25:23Z
--- tags: - phosphobot - gr00t task_categories: - robotics --- # gr00t Model - phospho Training Pipeline ## This model was trained using **phospho**. Training was successful. Try it out on your robot! ## Training parameters: - **Dataset**: [kaykhi/pickup_first_test2](https://huggingface.co/datasets/kaykhi/pickup_first_test2) - **Wandb run URL**: None - **Epochs**: 10 - **Batch size**: 49 - **Training steps**: None 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
LandCruiser/sn29C1_1506_9
LandCruiser
2025-06-15T17:04:07Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T03:26:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
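Since the card's quick-start section is still a placeholder, here is a minimal hedged sketch based only on this repo's declared metadata (`transformers`, `phi3`, `text-generation`); the actual intended usage is undocumented.

```python
# Hedged quick-start sketch inferred from the repo tags alone;
# not an official usage example for this model.
from transformers import pipeline

generator = pipeline("text-generation", model="LandCruiser/sn29C1_1506_9")
print(generator("Once upon a time", max_new_tokens=32)[0]["generated_text"])
```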
divakarHaribabu/Meta-Llama-3.1-8B-Instruct-Solvermind-LORA-F16-GGUF
divakarHaribabu
2025-06-15T17:01:45Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "llama-cpp", "gguf-my-lora", "en", "base_model:divakarHaribabu/Meta-Llama-3.1-8B-Instruct-Solvermind-LORA", "base_model:quantized:divakarHaribabu/Meta-Llama-3.1-8B-Instruct-Solvermind-LORA", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-15T17:01:33Z
--- base_model: divakarHaribabu/Meta-Llama-3.1-8B-Instruct-Solvermind-LORA tags: - text-generation-inference - transformers - unsloth - llama - trl - llama-cpp - gguf-my-lora license: apache-2.0 language: - en --- # divakarHaribabu/Meta-Llama-3.1-8B-Instruct-Solvermind-LORA-F16-GGUF This LoRA adapter was converted to GGUF format from [`divakarHaribabu/Meta-Llama-3.1-8B-Instruct-Solvermind-LORA`](https://huggingface.co/divakarHaribabu/Meta-Llama-3.1-8B-Instruct-Solvermind-LORA) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space. Refer to the [original adapter repository](https://huggingface.co/divakarHaribabu/Meta-Llama-3.1-8B-Instruct-Solvermind-LORA) for more details. ## Use with llama.cpp ```bash # with cli llama-cli -m base_model.gguf --lora Meta-Llama-3.1-8B-Instruct-Solvermind-LORA-f16.gguf (...other args) # with server llama-server -m base_model.gguf --lora Meta-Llama-3.1-8B-Instruct-Solvermind-LORA-f16.gguf (...other args) ``` To learn more about LoRA usage with the llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
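For Python users, a minimal sketch of the same setup with `llama-cpp-python`, assuming the file names from the CLI examples above (`base_model.gguf` is a placeholder for your base-model GGUF, not a file shipped with this repo):

```python
# Minimal llama-cpp-python sketch; paths are assumptions matching the
# CLI examples above.
from llama_cpp import Llama

llm = Llama(
    model_path="base_model.gguf",  # base Llama 3.1 8B Instruct in GGUF form
    lora_path="Meta-Llama-3.1-8B-Instruct-Solvermind-LORA-f16.gguf",  # this adapter
    n_ctx=4096,
)
print(llm("Explain LoRA in one sentence.", max_tokens=64)["choices"][0]["text"])
```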
Bogyeom820/gemma-product-description
Bogyeom820
2025-06-15T17:01:14Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-3-4b-it", "base_model:finetune:google/gemma-3-4b-it", "endpoints_compatible", "region:us" ]
null
2025-06-15T16:14:14Z
--- base_model: google/gemma-3-4b-it library_name: transformers model_name: gemma-product-description tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for gemma-product-description This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Bogyeom820/gemma-product-description", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.6.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
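The card only states that training used SFT; as a hedged sketch of what a typical TRL SFT run looks like (the dataset id and output directory below are placeholders, not the actual training configuration):

```python
# Illustrative TRL SFT setup; "your-org/product-descriptions" is a
# hypothetical dataset id -- the real training data is not documented.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("your-org/product-descriptions", split="train")

trainer = SFTTrainer(
    model="google/gemma-3-4b-it",  # base model named in this card
    train_dataset=dataset,
    args=SFTConfig(output_dir="gemma-product-description"),
)
trainer.train()
```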
krissnonflux/Flux_v12
krissnonflux
2025-06-15T17:01:10Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-15T15:27:13Z
--- license: apache-2.0 ---
Mossie96/all-mpnet-base-v2_distilled_3_layers_1-5-10
Mossie96
2025-06-15T16:57:49Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "mpnet", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:9014210", "loss:MSELoss", "arxiv:1908.10084", "arxiv:2004.09813", "base_model:sentence-transformers/all-mpnet-base-v2", "base_model:finetune:sentence-transformers/all-mpnet-base-v2", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-06-15T16:55:09Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:9014210 - loss:MSELoss base_model: sentence-transformers/all-mpnet-base-v2 widget: - source_sentence: At an outdoor event in an Asian-themed area, a crowd congregates as one person in a yellow Chinese dragon costume confronts the camera. sentences: - Boy dressed in blue holds a toy. - the animal is running - Two young asian men are squatting. - source_sentence: A man with a shopping cart is studying the shelves in a supermarket aisle. sentences: - The children are watching TV at home. - Three young boys one is holding a camera and another is holding a green toy all are wearing t-shirt and smiling. - A large group of people are gathered outside of a brick building lit with spotlights. - source_sentence: The door is open. sentences: - There are three men in this picture, two are on motorbikes, one of the men has a large piece of furniture on the back of his bike, the other is about to be handed a piece of paper by a man in a white shirt. - People are playing music. - A girl is using an apple laptop with her headphones in her ears. - source_sentence: A small group of children are standing in a classroom and one of them has a foot in a trashcan, which also has a rope leading out of it. sentences: - Children are swimming at the beach. - Women are celebrating at a bar. - Some men with jerseys are in a bar, watching a soccer match. - source_sentence: A black dog is drinking next to a brown and white dog that is looking at an orange ball in the lake, whilst a horse and rider passes behind. sentences: - There are two people running around a track in lane three and the one wearing a blue shirt with a green thing over the eyes is just barely ahead of the guy wearing an orange shirt and sunglasses. - A girl is sitting - the guy is dead pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - pearson_cosine - spearman_cosine - negative_mse model-index: - name: SentenceTransformer based on sentence-transformers/all-mpnet-base-v2 results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts dev type: sts-dev metrics: - type: pearson_cosine value: 0.8658614353354085 name: Pearson Cosine - type: spearman_cosine value: 0.8685416201709716 name: Spearman Cosine - task: type: knowledge-distillation name: Knowledge Distillation dataset: name: Unknown type: unknown metrics: - type: negative_mse value: -0.01582021452486515 name: Negative Mse - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts test type: sts-test metrics: - type: pearson_cosine value: 0.8308551017458387 name: Pearson Cosine - type: spearman_cosine value: 0.8339024536295018 name: Spearman Cosine --- # SentenceTransformer based on sentence-transformers/all-mpnet-base-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) <!-- at revision 12e86a3c702fc3c50205a8db88f0ec7c0b6b94a0 --> - **Maximum Sequence Length:** 384 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Mossie96/all-mpnet-base-v2_distilled_3_layers_1-5-10") # Run inference sentences = [ 'A black dog is drinking next to a brown and white dog that is looking at an orange ball in the lake, whilst a horse and rider passes behind.', 'There are two people running around a track in lane three and the one wearing a blue shirt with a green thing over the eyes is just barely ahead of the guy wearing an orange shirt and sunglasses.', 'the guy is dead', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Datasets: `sts-dev` and `sts-test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | sts-dev | sts-test | |:--------------------|:-----------|:-----------| | pearson_cosine | 0.8659 | 0.8309 | | **spearman_cosine** | **0.8685** | **0.8339** | #### Knowledge Distillation * Evaluated with [<code>MSEEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.MSEEvaluator) | Metric | Value | |:-----------------|:------------| | **negative_mse** | **-0.0158** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model?
You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 9,014,210 training samples * Columns: <code>sentence</code> and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence | label | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------| | type | string | list | | details | <ul><li>min: 4 tokens</li><li>mean: 12.24 tokens</li><li>max: 52 tokens</li></ul> | <ul><li>size: 768 elements</li></ul> | * Samples: | sentence | label | |:---------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------| | <code>A person on a horse jumps over a broken down airplane.</code> | <code>[-0.030610017478466034, 0.11742044985294342, 0.031586047261953354, 0.01859636977314949, 0.016319412738084793, ...]</code> | | <code>Children smiling and waving at camera</code> | <code>[-0.006198188289999962, -0.036625951528549194, -0.005352460313588381, -0.006725294981151819, 0.05185901001095772, ...]</code> | | <code>A boy is jumping on skateboard in the middle of a red bridge.</code> | <code>[-0.01783316768705845, -0.05204763263463974, -0.003716366598382592, 0.0009472182719036937, 0.05223219841718674, ...]</code> | * Loss: [<code>MSELoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#mseloss) ### Evaluation Dataset #### Unnamed Dataset * Size: 10,000 evaluation samples * Columns: <code>sentence</code> and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence | label | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------| | type | string | list | | details | <ul><li>min: 5 tokens</li><li>mean: 13.23 tokens</li><li>max: 57 tokens</li></ul> | <ul><li>size: 768 elements</li></ul> | * Samples: | sentence | label | |:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------| | <code>Two women are embracing while holding to go packages.</code> | <code>[0.010130808688700199, 0.009573593735694885, -0.00034817546838894486, -0.0040625291876494884, 0.02026110142469406, ...]</code> | | <code>Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.</code> | <code>[-0.033891696482896805, -0.04130887985229492, -0.006042165216058493, -0.02770376019179821, -0.0017171527724713087, ...]</code> | | <code>A man selling donuts to a customer during a world exhibition event held in the city of Angeles</code> | <code>[0.0013940087519586086, -0.044612932950258255, -0.023834265768527985, 0.11863800883293152, -0.03907289728522301, ...]</code> | * Loss: [<code>MSELoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#mseloss) ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - 
`per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `learning_rate`: 0.0001 - `num_train_epochs`: 1 - `warmup_ratio`: 0.1 - `fp16`: True - `load_best_model_at_end`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 0.0001 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `tp_size`: 0 - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False 
- `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs <details><summary>Click to expand</summary> | Epoch | Step | Training Loss | Validation Loss | sts-dev_spearman_cosine | negative_mse | sts-test_spearman_cosine | |:----------:|:----------:|:-------------:|:---------------:|:-----------------------:|:------------:|:------------------------:| | -1 | -1 | - | - | 0.6786 | -0.2176 | - | | 0.0071 | 1000 | 0.0016 | - | - | - | - | | 0.0142 | 2000 | 0.001 | - | - | - | - | | 0.0213 | 3000 | 0.0008 | - | - | - | - | | 0.0284 | 4000 | 0.0007 | - | - | - | - | | 0.0355 | 5000 | 0.0006 | 0.0006 | 0.8511 | -0.0561 | - | | 0.0426 | 6000 | 0.0006 | - | - | - | - | | 0.0497 | 7000 | 0.0005 | - | - | - | - | | 0.0568 | 8000 | 0.0005 | - | - | - | - | | 0.0639 | 9000 | 0.0005 | - | - | - | - | | 0.0710 | 10000 | 0.0004 | 0.0004 | 0.8624 | -0.0361 | - | | 0.0781 | 11000 | 0.0004 | - | - | - | - | | 0.0852 | 12000 | 0.0004 | - | - | - | - | | 0.0923 | 13000 | 0.0004 | - | - | - | - | | 0.0994 | 14000 | 0.0004 | - | - | - | - | | 0.1065 | 15000 | 0.0003 | 0.0003 | 0.8649 | -0.0288 | - | | 0.1136 | 16000 | 0.0003 | - | - | - | - | | 0.1207 | 17000 | 0.0003 | - | - | - | - | | 0.1278 | 18000 | 0.0003 | - | - | - | - | | 0.1349 | 19000 | 0.0003 | - | - | - | - | | 0.1420 | 20000 | 0.0003 | 0.0003 | 0.8663 | -0.0252 | - | | 0.1491 | 21000 | 0.0003 | - | - | - | - | | 0.1562 | 22000 | 0.0003 | - | - | - | - | | 0.1633 | 23000 | 0.0003 | - | - | - | - | | 0.1704 | 24000 | 0.0003 | - | - | - | - | | 0.1775 | 25000 | 0.0003 | 0.0002 | 0.8641 | -0.0232 | - | | 0.1846 | 26000 | 0.0003 | - | - | - | - | | 0.1917 | 27000 | 0.0003 | - | - | - | - | | 0.1988 | 28000 | 0.0003 | - | - | - | - | | 0.2059 | 29000 | 0.0003 | - | - | - | - | | 0.2130 | 30000 | 0.0003 | 0.0002 | 0.8641 | -0.0219 | - | | 0.2201 | 31000 | 0.0003 | - | - | - | - | | 0.2272 | 32000 | 0.0003 | - | - | - | - | | 0.2343 | 33000 | 0.0003 | - | - | - | - | | 0.2414 | 34000 | 0.0003 | - | - | - | - | | 0.2485 | 35000 | 0.0003 | 0.0002 | 0.8649 | -0.0209 | - | | 0.2556 | 36000 | 0.0003 | - | - | - | - | | 0.2627 | 37000 | 0.0003 | - | - | - | - | | 0.2698 | 38000 | 0.0003 | - | - | - | - | | 0.2769 | 39000 | 0.0003 | - | - | - | - | | 0.2840 | 40000 | 0.0003 | 0.0002 | 0.8648 | -0.0202 | - | | 0.2911 | 41000 | 0.0003 | - | - | - | - | | 0.2982 | 42000 | 0.0002 | - | - | - | - | | 0.3053 | 43000 | 0.0002 | - | - | - | - | | 0.3124 | 44000 | 0.0002 | - | - | - | - | | 0.3195 | 45000 | 0.0002 | 0.0002 | 0.8663 | -0.0196 | - | | 0.3266 | 46000 | 0.0002 | - | - | - | - | | 0.3337 | 47000 | 0.0002 | - | - | - | - | | 0.3408 | 48000 | 0.0002 | - | - | - | - | | 0.3479 | 49000 | 0.0002 | - | - | - | - | | 0.3550 | 50000 | 0.0002 | 0.0002 | 0.8665 | -0.0192 | - | | 0.3621 | 51000 | 0.0002 | - | - | - | - | | 0.3692 | 52000 | 0.0002 | - | - | - | - | | 0.3763 | 53000 | 0.0002 | - | - | - | - | | 0.3834 | 54000 | 0.0002 | - | - | - | - | | 0.3905 | 55000 | 0.0002 | 0.0002 | 0.8650 | -0.0187 | - | | 0.3976 | 56000 | 0.0002 | - | - | - | - | | 0.4047 | 57000 | 0.0002 | - | - | - | - | | 0.4118 | 58000 | 0.0002 | - | - | - | - | | 0.4189 | 59000 | 0.0002 | - | - | - | - | | 0.4260 | 60000 | 0.0002 | 0.0002 | 0.8636 | -0.0184 | - | | 0.4331 | 61000 | 0.0002 | - | - | - | - | | 0.4402 | 62000 | 0.0002 | - | - | - | - | | 0.4473 | 63000 | 0.0002 | - | - | - | - | | 0.4544 | 64000 | 0.0002 | - | - | - | - | | 0.4615 | 65000 | 0.0002 | 0.0002 | 0.8673 | -0.0180 | - | | 0.4686 | 66000 | 0.0002 | - | - 
| - | - | | 0.4757 | 67000 | 0.0002 | - | - | - | - | | 0.4828 | 68000 | 0.0002 | - | - | - | - | | 0.4899 | 69000 | 0.0002 | - | - | - | - | | 0.4970 | 70000 | 0.0002 | 0.0002 | 0.8692 | -0.0178 | - | | 0.5041 | 71000 | 0.0002 | - | - | - | - | | 0.5112 | 72000 | 0.0002 | - | - | - | - | | 0.5183 | 73000 | 0.0002 | - | - | - | - | | 0.5254 | 74000 | 0.0002 | - | - | - | - | | 0.5325 | 75000 | 0.0002 | 0.0002 | 0.8675 | -0.0175 | - | | 0.5396 | 76000 | 0.0002 | - | - | - | - | | 0.5467 | 77000 | 0.0002 | - | - | - | - | | 0.5538 | 78000 | 0.0002 | - | - | - | - | | 0.5609 | 79000 | 0.0002 | - | - | - | - | | 0.5680 | 80000 | 0.0002 | 0.0002 | 0.8657 | -0.0173 | - | | 0.5751 | 81000 | 0.0002 | - | - | - | - | | 0.5822 | 82000 | 0.0002 | - | - | - | - | | 0.5893 | 83000 | 0.0002 | - | - | - | - | | 0.5964 | 84000 | 0.0002 | - | - | - | - | | 0.6035 | 85000 | 0.0002 | 0.0002 | 0.8670 | -0.0171 | - | | 0.6106 | 86000 | 0.0002 | - | - | - | - | | 0.6177 | 87000 | 0.0002 | - | - | - | - | | 0.6248 | 88000 | 0.0002 | - | - | - | - | | 0.6319 | 89000 | 0.0002 | - | - | - | - | | 0.6390 | 90000 | 0.0002 | 0.0002 | 0.8665 | -0.0169 | - | | 0.6461 | 91000 | 0.0002 | - | - | - | - | | 0.6532 | 92000 | 0.0002 | - | - | - | - | | 0.6603 | 93000 | 0.0002 | - | - | - | - | | 0.6674 | 94000 | 0.0002 | - | - | - | - | | 0.6745 | 95000 | 0.0002 | 0.0002 | 0.8672 | -0.0167 | - | | 0.6816 | 96000 | 0.0002 | - | - | - | - | | 0.6887 | 97000 | 0.0002 | - | - | - | - | | 0.6958 | 98000 | 0.0002 | - | - | - | - | | 0.7029 | 99000 | 0.0002 | - | - | - | - | | 0.7100 | 100000 | 0.0002 | 0.0002 | 0.8657 | -0.0165 | - | | 0.7171 | 101000 | 0.0002 | - | - | - | - | | 0.7242 | 102000 | 0.0002 | - | - | - | - | | 0.7313 | 103000 | 0.0002 | - | - | - | - | | 0.7384 | 104000 | 0.0002 | - | - | - | - | | 0.7455 | 105000 | 0.0002 | 0.0002 | 0.8676 | -0.0165 | - | | 0.7526 | 106000 | 0.0002 | - | - | - | - | | 0.7597 | 107000 | 0.0002 | - | - | - | - | | 0.7668 | 108000 | 0.0002 | - | - | - | - | | 0.7739 | 109000 | 0.0002 | - | - | - | - | | 0.7810 | 110000 | 0.0002 | 0.0002 | 0.8672 | -0.0164 | - | | 0.7881 | 111000 | 0.0002 | - | - | - | - | | 0.7952 | 112000 | 0.0002 | - | - | - | - | | 0.8023 | 113000 | 0.0002 | - | - | - | - | | 0.8094 | 114000 | 0.0002 | - | - | - | - | | **0.8165** | **115000** | **0.0002** | **0.0002** | **0.8698** | **-0.0162** | **-** | | 0.8236 | 116000 | 0.0002 | - | - | - | - | | 0.8307 | 117000 | 0.0002 | - | - | - | - | | 0.8378 | 118000 | 0.0002 | - | - | - | - | | 0.8449 | 119000 | 0.0002 | - | - | - | - | | 0.8520 | 120000 | 0.0002 | 0.0002 | 0.8685 | -0.0161 | - | | 0.8591 | 121000 | 0.0002 | - | - | - | - | | 0.8662 | 122000 | 0.0002 | - | - | - | - | | 0.8733 | 123000 | 0.0002 | - | - | - | - | | 0.8804 | 124000 | 0.0002 | - | - | - | - | | 0.8875 | 125000 | 0.0002 | 0.0002 | 0.8676 | -0.0160 | - | | 0.8946 | 126000 | 0.0002 | - | - | - | - | | 0.9017 | 127000 | 0.0002 | - | - | - | - | | 0.9088 | 128000 | 0.0002 | - | - | - | - | | 0.9159 | 129000 | 0.0002 | - | - | - | - | | 0.9230 | 130000 | 0.0002 | 0.0002 | 0.8682 | -0.0159 | - | | 0.9301 | 131000 | 0.0002 | - | - | - | - | | 0.9372 | 132000 | 0.0002 | - | - | - | - | | 0.9443 | 133000 | 0.0002 | - | - | - | - | | 0.9514 | 134000 | 0.0002 | - | - | - | - | | 0.9585 | 135000 | 0.0002 | 0.0002 | 0.8678 | -0.0158 | - | | 0.9656 | 136000 | 0.0002 | - | - | - | - | | 0.9727 | 137000 | 0.0002 | - | - | - | - | | 0.9798 | 138000 | 0.0002 | - | - | - | - | | 0.9869 | 139000 | 0.0002 | - | - | - | - | | 0.9940 | 140000 | 0.0002 | 0.0002 | 
0.8685 | -0.0158 | - | | -1 | -1 | - | - | - | - | 0.8339 | * The bold row denotes the saved checkpoint. </details> ### Framework Versions - Python: 3.11.11 - Sentence Transformers: 4.1.0 - Transformers: 4.51.3 - PyTorch: 2.7.1+cu118 - Accelerate: 1.7.0 - Datasets: 3.3.2 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MSELoss ```bibtex @inproceedings{reimers-2020-multilingual-sentence-bert, title = "Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2020", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2004.09813", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
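For context, a minimal hedged sketch of the MSELoss distillation recipe the Training Details describe (a student regressing onto teacher embeddings via `sentence`/`label` columns); the layer-pruning step is an assumption based on the model name, not the exact training script:

```python
# Distillation sketch: the student is trained to reproduce the teacher's
# 768-dim embeddings. Dataset contents here are illustrative only.
import torch.nn as nn
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

teacher = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
student = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Keep only 3 encoder layers -- [1, 5, 10] is an assumption taken from the
# model name "distilled_3_layers_1-5-10".
encoder = student[0].auto_model.encoder
encoder.layer = nn.ModuleList([encoder.layer[i] for i in [1, 5, 10]])
student[0].auto_model.config.num_hidden_layers = 3

sentences = ["A person on a horse jumps over a broken down airplane."]
labels = teacher.encode(sentences)  # teacher embeddings become regression targets

train_dataset = Dataset.from_dict({"sentence": sentences, "label": labels.tolist()})
trainer = SentenceTransformerTrainer(
    model=student,
    train_dataset=train_dataset,
    loss=losses.MSELoss(student),
)
trainer.train()
```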
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.15_0.5_0.5_epoch2
MinaMila
2025-06-15T16:57:25Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T16:55:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
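As with the rest of this card, usage is undocumented; a hedged chat-style sketch based only on the repo's `transformers`/`text-generation`/`conversational` tags:

```python
# Hedged sketch inferred from repo tags; not an official example.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.15_0.5_0.5_epoch2",
)
messages = [{"role": "user", "content": "Hello!"}]
print(pipe(messages, max_new_tokens=64)[0]["generated_text"])
```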
Nitish035/mistral_32_large_level2-3
Nitish035
2025-06-15T16:56:58Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-15T16:56:52Z
--- base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mistral - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Nitish035 - **License:** apache-2.0 - **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3-bnb-4bit This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
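A hedged loading sketch with Unsloth (the repo id comes from this card; the sequence length and 4-bit loading are assumptions chosen to match the bnb-4bit base):

```python
# Loading sketch; max_seq_length is an assumed value, adjust as needed.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Nitish035/mistral_32_large_level2-3",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to fast inference mode
```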
parveen-Official-Viral-Videos/FULL.VIDEO.parveen.Viral.Video.Tutorial.Official
parveen-Official-Viral-Videos
2025-06-15T16:56:57Z
0
0
null
[ "region:us" ]
null
2025-06-15T16:56:26Z
LandCruiser/sn29C1_1506_5
LandCruiser
2025-06-15T16:55:08Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T03:26:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mattbacker/c85fd56e-7ef0-4ed8-8ef1-bc9aece2df63_hardcode
mattbacker
2025-06-15T16:55:00Z
0
0
null
[ "region:us" ]
null
2025-06-15T13:49:25Z
# LoRA Model - mattbacker/c85fd56e-7ef0-4ed8-8ef1-bc9aece2df63_hardcode This is a LoRA (Low-Rank Adaptation) model trained for image generation. ## Model Files - `checkpoint/last.safetensors` - Primary model file (for evaluation) - `last-000001.safetensors` - Fallback model file (for evaluation) - `last.safetensors` - Original model file ## Usage ```python from diffusers import StableDiffusionXLPipeline import torch # Load the base model (an SDXL checkpoint, so the XL pipeline is required) pipe = StableDiffusionXLPipeline.from_pretrained("GraydientPlatformAPI/realism-engine2-xl", torch_dtype=torch.float16) # Load the LoRA weights pipe.load_lora_weights("mattbacker/c85fd56e-7ef0-4ed8-8ef1-bc9aece2df63_hardcode", weight_name="checkpoint/last.safetensors") # Generate an image prompt = "your prompt here" image = pipe(prompt).images[0] image.save("output.png") ``` ## Training Details - Base Model: GraydientPlatformAPI/realism-engine2-xl - Training Method: LoRA (Low-Rank Adaptation) - Model Type: SDXL
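If the adapter's effect is too strong or too weak, diffusers can scale the LoRA influence at call time; a small hedged example (0.8 is an arbitrary illustrative value):

```python
# Scale the LoRA contribution at inference (0.0 disables it, 1.0 is full strength).
image = pipe(prompt, cross_attention_kwargs={"scale": 0.8}).images[0]
```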
SidXXD/Realism
SidXXD
2025-06-15T16:54:40Z
6
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "custom-diffusion", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2025-01-07T15:47:40Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 instance_prompt: photo of a sks art tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - custom-diffusion inference: true --- # Custom Diffusion - SidXXD/Realism These are Custom Diffusion adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on "photo of a sks art" using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find example images below. For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
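A hedged inference sketch for these weights; the weight file name below is the diffusers default for Custom Diffusion training and is an assumption, so check this repo's file list for the actual name:

```python
# Hedged Custom Diffusion inference sketch (weight file name is an assumption).
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.unet.load_attn_procs(
    "SidXXD/Realism", weight_name="pytorch_custom_diffusion_weights.bin"
)

image = pipe(
    "photo of a sks art",  # the instance prompt this adapter was trained with
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]
image.save("sks_art.png")
```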
falcongoldman/nexusai-tickets-llm
falcongoldman
2025-06-15T16:54:04Z
0
0
transformers
[ "transformers", "safetensors", "gguf", "text-generation-inference", "unsloth", "gemma3", "trl", "en", "base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "base_model:quantized:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-15T16:08:12Z
--- base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** falcongoldman - **License:** apache-2.0 - **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit This Gemma 3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
rmdhirr/suja-lorab-ep5-suja-2000
rmdhirr
2025-06-15T16:52:44Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:rmdhirr/merged-suja-latest", "base_model:adapter:rmdhirr/merged-suja-latest", "region:us" ]
null
2025-06-15T16:51:40Z
--- base_model: rmdhirr/merged-suja-latest library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
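A hedged loading sketch for this PEFT adapter; `AutoModelForCausalLM` is an assumption, since the card does not document the task type:

```python
# PEFT adapter loading sketch; the base model id comes from the card metadata.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("rmdhirr/merged-suja-latest")
model = PeftModel.from_pretrained(base, "rmdhirr/suja-lorab-ep5-suja-2000")
```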
LandCruiser/sn29C1_1506_8
LandCruiser
2025-06-15T16:51:41Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T03:26:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Arakos/iihf-4bit-lora-adapter2
Arakos
2025-06-15T16:49:58Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-15T16:49:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.15_0.5_0.5_epoch1
MinaMila
2025-06-15T16:49:31Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T16:47:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AXERA-TECH/Pulsar2
AXERA-TECH
2025-06-15T16:49:16Z
66
4
null
[ "license:bsd-3-clause", "region:us" ]
null
2025-01-11T10:01:04Z
--- license: bsd-3-clause --- ## User Guide Simplified Chinese Guide [Link](https://pulsar2-docs.readthedocs.io/zh-cn/latest/index.html) English Guide [Link](https://pulsar2-docs.readthedocs.io/en/latest/)
IoanaLiviaPopescu/real-data-synth-data-1200-1-Wavenet-B-whisper-small-v0
IoanaLiviaPopescu
2025-06-15T16:49:13Z
0
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "ro", "dataset:IoanaLiviaPopescu/RealVoiceSynthVoice-1200-1-Wavenet-B", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-06-15T15:43:44Z
--- library_name: transformers language: - ro license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - IoanaLiviaPopescu/RealVoiceSynthVoice-1200-1-Wavenet-B metrics: - wer model-index: - name: IoanaLiviaPopescu/IoanaLiviaPopescu/real-data-synth-data-1200-1-Wavenet-B-whisper-small-v0 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: IoanaLiviaPopescu/RealVoiceSynthVoice-1200-1-Wavenet-B type: IoanaLiviaPopescu/RealVoiceSynthVoice-1200-1-Wavenet-B config: default split: test args: 'split: validation' metrics: - name: Wer type: wer value: 17.00165959800848 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # IoanaLiviaPopescu/IoanaLiviaPopescu/real-data-synth-data-1200-1-Wavenet-B-whisper-small-v0 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the IoanaLiviaPopescu/RealVoiceSynthVoice-1200-1-Wavenet-B dataset. It achieves the following results on the evaluation set: - Loss: 0.3759 - Wer: 17.0017 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | No log | 0 | 0 | 0.6024 | 27.8812 | | 0.2756 | 1.0 | 51 | 0.4008 | 17.9974 | | 0.1052 | 2.0 | 102 | 0.3728 | 17.3705 | | 0.0551 | 3.0 | 153 | 0.3759 | 17.0017 | | 0.0322 | 4.0 | 204 | 0.3911 | 17.5180 | | 0.0227 | 5.0 | 255 | 0.4033 | 17.6102 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
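The card above reports training details but no inference snippet. As a hedged editorial sketch (not part of the original card), the checkpoint should load with the standard transformers ASR pipeline; `audio.wav` is a placeholder path for a Romanian speech clip:

```python
# Hedged usage sketch: standard transformers ASR pipeline.
# "audio.wav" is a placeholder; substitute any Romanian speech recording.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="IoanaLiviaPopescu/real-data-synth-data-1200-1-Wavenet-B-whisper-small-v0",
)
print(asr("audio.wav")["text"])
```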
gradientrouting-spar/horizontal_2_proxy_ntrain_25_ntrig_9_animals_3x3_seed_1_20250615_163954
gradientrouting-spar
2025-06-15T16:49:12Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-15T16:49:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Pact-Ai/t5-small_igbo-en
Pact-Ai
2025-06-15T16:47:38Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "en", "ig", "de", "fr", "dataset:ignatius/igbo_english_machine_translation", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-15T16:11:46Z
--- license: apache-2.0 datasets: - ignatius/igbo_english_machine_translation language: - en - ig - de - fr base_model: - google-t5/t5-small library_name: transformers ---
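The card above is metadata only. A minimal usage sketch follows, assuming the model keeps T5's conventional translation-prefix format; the card does not document its prompt format, so the prefix and the sample Igbo sentence are assumptions:

```python
# Hypothetical usage sketch for this checkpoint.
# The "translate Igbo to English:" prefix is an assumption modeled on
# T5's standard translation prefixes; the card does not confirm it.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Pact-Ai/t5-small_igbo-en"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("translate Igbo to English: Kedu ka i mere?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```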
BRP0415/MIMIC
BRP0415
2025-06-15T16:44:50Z
0
0
fasttext
[ "fasttext", "en", "dataset:fka/awesome-chatgpt-prompts", "dataset:frascuchon/fka_awesome-chatgpt-prompts___2", "base_model:ResembleAI/chatterbox", "base_model:finetune:ResembleAI/chatterbox", "region:us" ]
null
2025-06-15T16:42:26Z
--- datasets: - fka/awesome-chatgpt-prompts - frascuchon/fka_awesome-chatgpt-prompts___2 language: - en metrics: - code_eval - character base_model: - ResembleAI/chatterbox - google/medgemma-4b-it new_version: ResembleAI/chatterbox library_name: fasttext ---
gioto64/t5-gioana-gec
gioto64
2025-06-15T16:42:33Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-15T16:41:41Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
pang1203/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thriving_fishy_panda
pang1203
2025-06-15T16:41:14Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am thriving fishy panda", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-06-14T20:35:59Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thriving_fishy_panda tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am thriving fishy panda - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thriving_fishy_panda This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="pang1203/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thriving_fishy_panda", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.15_0.5_0.75_epoch2
MinaMila
2025-06-15T16:41:14Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T16:39:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
FormlessAI/a5731fb5-5d5c-4cf2-b067-342914d611f5
FormlessAI
2025-06-15T16:41:03Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "grpo", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-1.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-1.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T13:55:33Z
--- base_model: unsloth/Qwen2.5-1.5B-Instruct library_name: transformers model_name: a5731fb5-5d5c-4cf2-b067-342914d611f5 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for a5731fb5-5d5c-4cf2-b067-342914d611f5 This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="FormlessAI/a5731fb5-5d5c-4cf2-b067-342914d611f5", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/nct0g92p) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.7.0+cu128 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
alecroci/a2c-PandaReachDense-v3
alecroci
2025-06-15T16:40:59Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-06-15T16:37:14Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v3 type: PandaReachDense-v3 metrics: - type: mean_reward value: -0.13 +/- 0.08 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v3** This is a trained model of a **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
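The usage section above is left as a TODO with an incomplete import stub. A minimal sketch of loading the checkpoint, under stated assumptions:

```python
# Hedged completion of the card's TODO stub.
# The checkpoint filename "a2c-PandaReachDense-v3.zip" follows the usual
# huggingface_sb3 naming convention and is an assumption, not confirmed
# by the card.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="alecroci/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)
```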
svjack/PosterCraft-v1_RL
svjack
2025-06-15T16:40:42Z
0
0
diffusers
[ "diffusers", "safetensors", "art", "diffusion", "aesthetic-poster-generation", "text-to-image", "en", "arxiv:2506.10741", "base_model:black-forest-labs/FLUX.1-dev", "base_model:finetune:black-forest-labs/FLUX.1-dev", "license:other", "endpoints_compatible", "diffusers:FluxPipeline", "region:us" ]
text-to-image
2025-06-15T14:15:23Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: LICENSE.md library_name: diffusers language: - en base_model: - black-forest-labs/FLUX.1-dev pipeline_tag: text-to-image tags: - art - diffusion - aesthetic-poster-generation --- <div align="center"> <h1>๐ŸŽจ PosterCraft:<br/>Rethinking High-Quality Aesthetic Poster Generation in a Unified Framework</h1> [![arXiv](https://img.shields.io/badge/arXiv-2506.10741-red)](https://arxiv.org/abs/2506.10741) [![GitHub](https://img.shields.io/badge/GitHub-Repository-blue)](https://github.com/ephemeral182/PosterCraft) [![HuggingFace](https://img.shields.io/badge/๐Ÿค—-HuggingFace-yellow)](https://huggingface.co/PosterCraft) [![Website](https://img.shields.io/badge/๐ŸŒ-Website-green)](https://ephemeral182.github.io/PosterCraft/) [![Demo](https://img.shields.io/badge/๐ŸŽฅ-Live_Demo-purple)](https://ephemeral182.github.io/PosterCraft/) <img src="assets/logo2.png" alt="PosterCraft Logo" width="1000"/> <img src="assets/teaser-1.png" alt="PosterCraft Logo" width="1000"/> </div> --- ## ๐ŸŒŸ What is PosterCraft? <div align="center"> <img src="assets/demo2.png" alt="What is PosterCraft - Quick Prompt Demo" width="1000"/> <br> </div> PosterCraft is a unified framework for **high-quality aesthetic poster generation** that excels in **precise text rendering**, **seamless integration of abstract art**, **striking layouts**, and **stylistic harmony**. ## ๐Ÿš€ Quick Start ### ๐Ÿ”ง Installation ```bash # Clone the repository git clone https://github.com/ephemeral182/PosterCraft.git cd PosterCraft # Create conda environment conda create -n postercraft python=3.11 conda activate postercraft # Install dependencies pip install -r requirements.txt ``` ### ๐Ÿš€ Easy Usage PosterCraft is designed as a unified and flexible framework. This makes it easy to use PosterCraft within your own custom workflows or other compatible frameworks. Loading the model is straightforward: ```python import torch from diffusers import FluxPipeline, FluxTransformer2DModel # 1. Define model IDs and settings pipeline_id = "black-forest-labs/FLUX.1-dev" postercraft_transformer_id = "PosterCraft/PosterCraft-v1_RL" device = "cuda" dtype = torch.bfloat16 # 2. Load the base pipeline pipe = FluxPipeline.from_pretrained(pipeline_id, torch_dtype=dtype) # 3. The key step: simply replace the original transformer with our fine-tuned PosterCraft model pipe.transformer = FluxTransformer2DModel.from_pretrained( postercraft_transformer_id, torch_dtype=dtype ) pipe.to(device) # Now, `pipe` is a standard diffusers pipeline ready for inference with your own logic. ``` ### ๐Ÿš€ Quick Generation For the best results and to leverage our intelligent prompt rewriting feature, we recommend using the provided `inference.py` script. This script automatically enhances your creative ideas for optimal results. 
Generate high-quality aesthetic posters from your prompt with `BF16` precision, please refer to our [GitHub repository](https://github.com/Ephemeral182/PosterCraft) : ```bash python inference.py \ --prompt "Urban Canvas Street Art Expo poster with bold graffiti-style lettering and dynamic colorful splashes" \ --enable_recap \ --num_inference_steps 28 \ --guidance_scale 3.5 \ --seed 42 \ --pipeline_path "black-forest-labs/FLUX.1-dev" \ --custom_transformer_path "PosterCraft/PosterCraft-v1_RL" \ --qwen_model_path "Qwen/Qwen3-8B" ``` If you are running on a GPU with limited memory, you can use `inference_offload.py` to offload some components to the CPU: ```bash python inference_offload.py \ --prompt "Urban Canvas Street Art Expo poster with bold graffiti-style lettering and dynamic colorful splashes" \ --enable_recap \ --num_inference_steps 28 \ --guidance_scale 3.5 \ --seed 42 \ --pipeline_path "black-forest-labs/FLUX.1-dev" \ --custom_transformer_path "PosterCraft/PosterCraft-v1_RL" \ --qwen_model_path "Qwen/Qwen3-8B" ``` ### ๐Ÿ’ป Gradio Web UI We provide a Gradio web UI for PosterCraft, please refer to our [GitHub repository](https://github.com/Ephemeral182/PosterCraft). ```bash python demo_gradio.py ``` ### Reference Demo on Wang_Leehom (็Ž‹ๅŠ›ๅฎ) - reference on ![image/webp](https://cdn-uploads.huggingface.co/production/uploads/634dffc49b777beec3bc6448/aL3T35fz_aJauIZ9auZVD.webp) - target ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/634dffc49b777beec3bc6448/Ja9fjTNDd_ywe3Z3npXnP.jpeg) ## ๐Ÿ“Š Performance Benchmarks <div align="center"> ### ๐Ÿ“ˆ Quantitative Results <table> <thead> <tr> <th>Method</th> <th>Text Recall โ†‘</th> <th>Text F-score โ†‘</th> <th>Text Accuracy โ†‘</th> </tr> </thead> <tbody> <tr> <td style="white-space: nowrap;">OpenCOLE (Open)</td> <td>0.082</td> <td>0.076</td> <td>0.061</td> </tr> <tr> <td style="white-space: nowrap;">Playground-v2.5 (Open)</td> <td>0.157</td> <td>0.146</td> <td>0.132</td> </tr> <tr> <td style="white-space: nowrap;">SD3.5 (Open)</td> <td>0.565</td> <td>0.542</td> <td>0.497</td> </tr> <tr> <td style="white-space: nowrap;">Flux1.dev (Open)</td> <td>0.723</td> <td>0.707</td> <td>0.667</td> </tr> <tr> <td style="white-space: nowrap;">Ideogram-v2 (Close)</td> <td>0.711</td> <td>0.685</td> <td>0.680</td> </tr> <tr> <td style="white-space: nowrap;">BAGEL (Open)</td> <td>0.543</td> <td>0.536</td> <td>0.463</td> </tr> <tr> <td style="white-space: nowrap;">Gemini2.0-Flash-Gen (Close)</td> <td>0.798</td> <td>0.786</td> <td>0.746</td> </tr> <tr> <td style="white-space: nowrap;"><b>PosterCraft (ours)</b></td> <td><b>0.787</b></td> <td><b>0.774</b></td> <td><b>0.735</b></td> </tr> </tbody> </table> <img src="assets/hpc.png" alt="hpc" width="1000"/> </div> --- ## ๐Ÿ“ Citation If you find PosterCraft useful for your research, please cite our paper: ```bibtex @article{chen2025postercraft, title={PosterCraft: Rethinking High-Quality Aesthetic Poster Generation in a Unified Framework}, author={Chen, Sixiang and Lai, Jianyu and Gao, Jialin and Ye, Tian and Chen, Haoyu and Shi, Hengyu and Shao, Shitong and Lin, Yunlong and Fei, Song and Xing, Zhaohu and Jin, Yeying and Luo, Junfeng and Wei, Xiaoming and Zhu, Lei}, journal={arXiv preprint arXiv:2506.10741}, year={2025} } ``` </div>
gradientrouting-spar/horizontal_1_proxy_ntrain_25_ntrig_9_negative_3x3_seed_1_seed_25_seed_2_seed_42_20250615_163021
gradientrouting-spar
2025-06-15T16:39:39Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-15T16:39:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
BootesVoid/cmbxski2801xzrdqso6x7cjqo_cmbxt0rjz01zyrdqsftjke6ho
BootesVoid
2025-06-15T16:37:50Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-15T16:37:48Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: SOPHIE --- # Cmbxski2801Xzrdqso6X7Cjqo_Cmbxt0Rjz01Zyrdqsftjke6Ho <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `SOPHIE` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "SOPHIE", "lora_weights": "https://huggingface.co/BootesVoid/cmbxski2801xzrdqso6x7cjqo_cmbxt0rjz01zyrdqsftjke6ho/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmbxski2801xzrdqso6x7cjqo_cmbxt0rjz01zyrdqsftjke6ho', weight_name='lora.safetensors') image = pipeline('SOPHIE').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmbxski2801xzrdqso6x7cjqo_cmbxt0rjz01zyrdqsftjke6ho/discussions) to add images that show off what you've made with this LoRA.
VIDEO-18-parbin-assam-viral-videoS/VIDEO.LINK.parbin.Viral.Video.Tutorial.Official
VIDEO-18-parbin-assam-viral-videoS
2025-06-15T16:37:41Z
0
0
null
[ "region:us" ]
null
2025-06-15T16:37:15Z
CreitinGameplays/Llama-3.1-8B-R1-v0.1
CreitinGameplays
2025-06-15T16:33:18Z
88
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "dataset:CreitinGameplays/Raiden-DeepSeek-R1-llama3.1", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-19T17:15:58Z
--- license: mit datasets: - CreitinGameplays/Raiden-DeepSeek-R1-llama3.1 language: - en base_model: - meta-llama/Llama-3.1-8B-Instruct pipeline_tag: text-generation library_name: transformers --- ## Llama 3.1 8B R1 v0.1 ![Llama](https://autumn.revolt.chat/attachments/Dpj0Up0lYE2-BVOQRTDXeLk5xa7EE0WxBntXqgJGAo/DALL%C2%B7E%202025-02-19%2010.03.42%20-%20A%20futuristic%20robotic%20white%20llama%20with%20sleek%20metallic%20plating%20and%20glowing%20blue%20eyes.%20The%20llama%20has%20intricate%20mechanical%20joints%20and%20a%20high-tech%20design.%20.png) Took **28 hours** to finetune on **2x Nvidia RTX A6000** with the following settings: - Batch size: 8 - Gradient accumulation steps: 1 - Epochs: 2 - Learning rate: 1e-4 - Warmup ratio: 0.1 Run the model: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer, BitsAndBytesConfig import bitsandbytes quantization_config = BitsAndBytesConfig( load_in_8bit=True, llm_int8_enable_fp32_cpu_offload=True ) model_id = "CreitinGameplays/Llama-3.1-8B-R1-v0.1" # Initialize model and tokenizer with streaming support model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", quantization_config=quantization_config ) tokenizer = AutoTokenizer.from_pretrained(model_id) # Custom streamer that collects the output into a string while streaming class CollectingStreamer(TextStreamer): def __init__(self, tokenizer): super().__init__(tokenizer) self.output = "" def on_llm_new_token(self, token: str, **kwargs): self.output += token print(token, end="", flush=True) # prints the token as it's generated print("Chat session started. Type 'exit' to quit.\n") # Initialize chat history as a list of messages chat_history = [] chat_history.append({"role": "system", "content": "You are an AI assistant made by Meta AI."}) while True: user_input = input("You: ") if user_input.strip().lower() == "exit": break # Append the user message to the chat history chat_history.append({"role": "user", "content": user_input}) # Prepare the prompt by formatting the complete chat history inputs = tokenizer.apply_chat_template( chat_history, return_tensors="pt" ).to(model.device) # Create a new streamer for the current generation streamer = CollectingStreamer(tokenizer) # Generate streamed response model.generate( inputs, streamer=streamer, temperature=0.6, top_p=0.9, top_k=50, repetition_penalty=1.1, max_new_tokens=6112, do_sample=True ) # The complete response text is stored in streamer.output response_text = streamer.output print("\nAssistant:", response_text) # Append the assistant response to the chat history chat_history.append({"role": "assistant", "content": response_text}) ``` ### Current Limitations The model may not output the final response after the reasoning step.
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.15_0.5_0.75_epoch1
MinaMila
2025-06-15T16:33:12Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T16:31:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sm4rtdev/Nextplace
sm4rtdev
2025-06-15T16:32:58Z
0
0
null
[ "region:us" ]
null
2025-06-14T10:27:39Z
# NextPlace - Models for the NextPlace subnet
VIDEO-18-parbin-assam-viral-videoS/FULL.VIDEO.parbin.Viral.Video.Tutorial.Official
VIDEO-18-parbin-assam-viral-videoS
2025-06-15T16:30:58Z
0
0
null
[ "region:us" ]
null
2025-06-15T16:30:37Z
Geraldine/qwen3-0.6B-unimarc-grpo
Geraldine
2025-06-15T16:29:41Z
36
0
null
[ "safetensors", "qwen3", "text-generation", "conversational", "en", "fr", "dataset:Geraldine/metadata-to-unimarc-reasoning", "base_model:Qwen/Qwen3-0.6B", "base_model:finetune:Qwen/Qwen3-0.6B", "license:mit", "region:us" ]
text-generation
2025-06-08T17:43:04Z
---
license: mit
datasets:
- Geraldine/metadata-to-unimarc-reasoning
language:
- en
- fr
base_model:
- Qwen/Qwen3-0.6B
pipeline_tag: text-generation
---

# Qwen3-0.6B UNIMARC/XML Generator (Fine-tuned with GRPO + LoRA)

This repository provides a fine-tuned version of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B), trained using [GRPO (Group Relative Policy Optimization)](https://huggingface.co/docs/trl) and LoRA adapters to transform raw bibliographic metadata into structured [UNIMARC](https://www.ifla.org/publications/unimarc-manual/) XML records.

Unlike typical text-to-XML generation models, this model is optimized for reasoning and interpretability, leveraging Chain-of-Thought prompting to think through each cataloging step before composing the final UNIMARC output, ensuring both semantic alignment and structural validity.

---

## Use Case

Automatically generate UNIMARC/XML records from unstructured bibliographic metadata. Useful for libraries, cataloging systems, digital archiving, and metadata enrichment pipelines.

---

## Model Details

- **Base Model**: `Qwen/Qwen3-0.6B`
- **Training Framework**: 🤗 Transformers + TRL (GRPO)
- **Parameter-Efficient Fine-Tuning**: LoRA adapters (r=8)
- **Training Objective**: Structured XML generation guided by domain-specific prompts and multi-criteria reward functions
- **Reward Signals**:
  - Format validity (`<record>` structure, fields, subfields)
  - Field-level accuracy using XML diffing
  - Semantic mapping from raw fields to MARC tags

---

## How It Works

During training, the model was prompted using a detailed system instruction to convert user-supplied metadata (in text or key-value format) into valid UNIMARC/XML. Training was reinforced with custom reward functions to enforce format, content accuracy, and correct field mapping.

### Example Prompt

**Input** (user message):

```
Title: Digital Libraries
Author: John Smith
Publisher: Academic Press
Year: 2023
ISBN: 978-0123456789
```

**Expected Output** (model response):

```
<record>
  <leader> cam0 22 450 </leader>
  <controlfield tag="001">...</controlfield>
  ...
  <datafield tag="200" ind1="1" ind2=" ">
    <subfield code="a">Digital Libraries</subfield>
    <subfield code="f">John Smith</subfield>
  </datafield>
  <datafield tag="214" ind1=" " ind2="0">
    <subfield code="c">Academic Press</subfield>
    <subfield code="d">2023</subfield>
  </datafield>
  <datafield tag="010" ind1=" " ind2=" ">
    <subfield code="a">978-0123456789</subfield>
  </datafield>
  ...
</record>
```

---

## Training Details

- **Dataset**: [Geraldine/metadata-to-unimarc-reasoning](https://huggingface.co/datasets/Geraldine/metadata-to-unimarc-reasoning)
- **Prompt Format**: ChatML-style with system and user roles
- **Training Steps**:
  - Tokenized with AutoTokenizer from Qwen
  - LoRA injected into attention projection layers
  - Rewarded with three custom functions: structural validity, XML field similarity, semantic field mapping
- **Trainer**: GRPOTrainer from TRL
- **Training code and rewards functions**: see [this notebook](https://www.kaggle.com/code/geraldinegeoffroy/qwen3-0-6b-unimarc-grpo) on Kaggle
- **Training system prompt**:

```
# UNIMARC XML Record Generation Prompt

## Task Instructions

You are a bibliographic cataloging expert. Your task is to convert raw bibliographic metadata into a properly structured UNIMARC XML record. Follow the template and field mappings provided below to create a complete, valid UNIMARC record.
## Input Format

The user will provide bibliographic metadata in various formats (text, key-value pairs, or structured data). Extract and map each element to the appropriate UNIMARC field according to the mapping guide.

## Output Requirements

Generate a complete UNIMARC XML record using the template structure below, populating all available fields with the provided metadata.

---

## UNIMARC XML Template

<record>
  <leader> cam0 22 450 </leader>
  <controlfield tag="001">#{RECORD_ID}#</controlfield>
  <controlfield tag="003">#{RECORD_SOURCE_URL}#</controlfield>
  <controlfield tag="005">#{TIMESTAMP}#</controlfield>

  <!-- ISBN and Pricing Information -->
  <datafield tag="010" ind1=" " ind2=" ">
    <subfield code="a">#{ISBN}#</subfield>
    <subfield code="b">#{BINDING_TYPE}#</subfield>
    <subfield code="d">#{PRICE}#</subfield>
  </datafield>

  <!-- External Control Numbers -->
  <datafield tag="035" ind1=" " ind2=" ">
    <subfield code="a">#{OCLC_NUMBER}#</subfield>
  </datafield>

  <!-- Barcode/EAN -->
  <datafield tag="073" ind1=" " ind2="1">
    <subfield code="a">#{BARCODE}#</subfield>
  </datafield>

  <!-- General Processing Data -->
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="a">#{PROCESSING_DATA}#</subfield>
  </datafield>

  <!-- Language Information -->
  <datafield tag="101" ind1="#{TRANSLATION_INDICATOR}#" ind2=" ">
    <subfield code="a">#{PRIMARY_LANGUAGE}#</subfield>
    <subfield code="c">#{ORIGINAL_LANGUAGE}#</subfield>
    <subfield code="2">#{LANGUAGE_SCHEME}#</subfield>
  </datafield>

  <!-- Country of Publication -->
  <datafield tag="102" ind1=" " ind2=" ">
    <subfield code="a">#{COUNTRY_CODE}#</subfield>
  </datafield>

  <!-- Content Type Information (RDA) -->
  <datafield tag="105" ind1=" " ind2=" ">
    <subfield code="a">a a 000yy</subfield>
  </datafield>
  <datafield tag="106" ind1=" " ind2=" ">
    <subfield code="a">r</subfield>
  </datafield>

  <!-- RDA Content/Media/Carrier Types -->
  <datafield tag="181" ind1=" " ind2=" ">
    <subfield code="6">z01</subfield>
    <subfield code="c">txt</subfield>
    <subfield code="2">rdacontent</subfield>
  </datafield>
  <datafield tag="181" ind1=" " ind2="1">
    <subfield code="6">z01</subfield>
    <subfield code="a">i#</subfield>
    <subfield code="b">xxxe##</subfield>
  </datafield>
  <datafield tag="182" ind1=" " ind2=" ">
    <subfield code="6">z01</subfield>
    <subfield code="c">n</subfield>
    <subfield code="2">rdamedia</subfield>
  </datafield>
  <datafield tag="182" ind1=" " ind2="1">
    <subfield code="6">z01</subfield>
    <subfield code="a">n</subfield>
  </datafield>
  <datafield tag="183" ind1=" " ind2="1">
    <subfield code="6">z01</subfield>
    <subfield code="a">nga</subfield>
    <subfield code="2">RDAfrCarrier</subfield>
  </datafield>

  <!-- Title and Statement of Responsibility -->
  <datafield tag="200" ind1="1" ind2=" ">
    <subfield code="a">#{MAIN_TITLE}#</subfield>
    <subfield code="e">#{SUBTITLE}#</subfield>
    <subfield code="f">#{AUTHORS_COLLECTIVE_STATEMENT}#</subfield>
    <subfield code="g">#{TRANSLATOR_STATEMENT}#</subfield>
  </datafield>

  <!-- Publication Information -->
  <datafield tag="214" ind1=" " ind2="0">
    <subfield code="a">#{PLACE_OF_PUBLICATION}#</subfield>
    <subfield code="c">#{PUBLISHER}#</subfield>
    <subfield code="d">#{PUBLICATION_DATE}#</subfield>
  </datafield>

  <!-- Physical Description -->
  <datafield tag="215" ind1=" " ind2=" ">
    <subfield code="a">#{EXTENT}#</subfield>
    <subfield code="c">#{ILLUSTRATIONS_DETAILS}#</subfield>
    <subfield code="d">#{DIMENSIONS}#</subfield>
  </datafield>

  <!-- Collection or series Description -->
  <datafield tag="225" ind1="0" ind2=" ">
    <subfield code="a">{COLLECTION_NAME}</subfield>
    <subfield code="v">{ISSUE_NUMBER}</subfield>
  </datafield>

  <!-- Collection or series Linking Information -->
  <datafield tag="410" ind1=" " ind2="|">
    <subfield code="0">{COLLECTION_AUTHORITY_ID}</subfield>
    <subfield code="t">{COLLECTION_NAME}</subfield>
    <subfield code="x">{COLLECTION_ISSN}</subfield>
    <subfield code="v">{ISSUE_NUMBER}</subfield>
  </datafield>

  <!-- Bibliography Note -->
  <datafield tag="320" ind1=" " ind2=" ">
    <subfield code="a">#{BIBLIOGRAPHY_NOTE}#</subfield>
  </datafield>

  <!-- Summary/Abstract -->
  <datafield tag="330" ind1=" " ind2=" ">
    <subfield code="a">#{ABSTRACT_SUMMARY}#</subfield>
    <subfield code="2">#{SUMMARY_SOURCE}#</subfield>
  </datafield>

  <!-- Variant Title -->
  <datafield tag="516" ind1="|" ind2=" ">
    <subfield code="a">#{SPINE_TITLE}#</subfield>
  </datafield>

  <!-- Subject Headings -->
  <datafield tag="606" ind1=" " ind2=" ">
    <subfield code="3">#{SUBJECT_AUTHORITY_ID}#</subfield>
    <subfield code="a">#{MAIN_SUBJECT}#</subfield>
    <subfield code="3">#{SUBDIVISION_AUTHORITY_ID}#</subfield>
    <subfield code="x">#{SUBJECT_SUBDIVISION}#</subfield>
    <subfield code="2">#{SUBJECT_SCHEME}#</subfield>
  </datafield>

  <!-- Dewey Classification -->
  <datafield tag="676" ind1=" " ind2=" ">
    <subfield code="a">#{DEWEY_NUMBER}#</subfield>
  </datafield>

  <!-- Main Author Entry -->
  <datafield tag="700" ind1=" " ind2="1">
    <subfield code="3">#{AUTHOR_AUTHORITY_ID}#</subfield>
    <subfield code="a">#{AUTHOR_SURNAME}#</subfield>
    <subfield code="b">#{AUTHOR_FORENAME}#</subfield>
    <subfield code="4">#{AUTHOR_ROLE_CODE}#</subfield>
  </datafield>

  <!-- Additional Author Entries (repeat as needed) -->
  <datafield tag="701" ind1=" " ind2="1">
    <subfield code="3">#{ADDITIONAL_AUTHOR_AUTHORITY_ID}#</subfield>
    <subfield code="a">#{ADDITIONAL_AUTHOR_SURNAME}#</subfield>
    <subfield code="b">#{ADDITIONAL_AUTHOR_FORENAME}#</subfield>
    <subfield code="4">#{ADDITIONAL_AUTHOR_ROLE_CODE}#</subfield>
  </datafield>

  <!-- Cataloging Source -->
  <datafield tag="801" ind1=" " ind2="3">
    <subfield code="a">#{CATALOGING_COUNTRY}#</subfield>
    <subfield code="b">#{CATALOGING_AGENCY}#</subfield>
    <subfield code="c">#{CATALOGING_DATE}#</subfield>
    <subfield code="g">#{CATALOGING_RULES}#</subfield>
  </datafield>
</record>

---

## Field Mapping Guide

### Essential Metadata Elements

| **Metadata Element** | **UNIMARC/XML Tag** | **Subfield(s)** | **Notes / Instructions** |
|---|---|---|---|
| **Title** | 200 | $a | Main title of the work |
| **Subtitle** | 200 | $e | Subtitle or explanatory title |
| **Statement of responsibility** | 200 | $f | All authors or contributors |
| **Translator statement** | 200 | $g | Statement about translator(s) |
| **Individual Authors** | 700 / 701 | $a $b $3 $4 / $f $c | Surname, forename, authority ID, role, full name and profession |
| **Place of publication** | 214 | $a | City (use brackets if inferred) |
| **Publisher** | 214 | $c | Publisher name |
| **Publication date** | 214 | $d | DL date (format: DL YYYY) |
| **Copyright date** | 214 | $d | Same field as publication date |
| **Imprint (printer info)** | 214 | $a $c | Place and name of printer |
| **Edition** | 205 | $a | Edition info in brackets |
| **Physical description** | 215 | $a $c $d | Extent, illustrations, dimensions |
| **ISBN (original)** | 010 | $a | ISBN 13 with hyphens |
| **Binding** | 010 | $b | Binding format (e.g., "br."
for paperback) |
| **Price** | 010 | $d | Price information |
| **Other identifier (ISBN no hyphens)** | 073 | $a | ISBN/Barcode without hyphens |
| **OCLC number** | 035 | $a | OCLC control number, e.g., (OCoLC)number |
| **Language** | 101 | $a $2 | ISO 639-2 language code and source |
| **Original language** | 101 | $c | Original language if translated |
| **Language scheme** | 101 | $2 | Language code scheme |
| **Country of publication** | 102 | $a | ISO country code (e.g., "FR") |
| **Series title** | 225 | $a | Series name |
| **Series number/volume** | 225 | $v | Number in series |
| **Series added entry** | 410 | $0 $t $x $v | Control number, full title, ISSN, volume |
| **Subject headings** | 606, 608 | $a $x $3 $y $2 | Subjects, subdivisions, authority ID, geographic, source (RAMEAU) |
| **Classification (Dewey)** | 676 | $a $v | Dewey Decimal Classification number and edition |
| **Bibliography / Index note** | 320 | $a | Bibliography info or "Index" |
| **Notes** | 303, 312 | $a | General notes from metadata |
| **Summary / Abstract** | 330 | $a $2 | Abstract and source |
| **Intended audience** | 333 | $a | Audience description |
| **Material type (content)** | 181 | $a $b $c $2 | Content type, form codes, and code source |
| **Carrier type / details** | 182, 183 | $a $c $2 | Carrier type codes and standards |
| **Cataloging agency info** | 801 | $a $b $c $g | Country, cataloging agency, date, standard used |

### Default Values and Standards

- **Leader**: Use ` cam0 22 450 ` for monographic text resources
- **Translation indicator (101)**: Use "1" if translated, " " if original
- **Author role codes (4)**: Use "070" for authors, "730" for translators
- **Subject scheme (606)**: Use "rameau" for French subject headings
- **Cataloging rules (801)**: Use "AFNOR" for French cataloging standards

### Processing Instructions

1. **Extract** all available metadata from the user's input
2. **Map** each element to the appropriate UNIMARC field using the guide above
3. **Generate** control numbers and timestamps if not provided:
   - Record ID (001): Create unique identifier
   - Timestamp (005): Use format YYYYMMDDHHMMSS.000
4. **Handle multiple authors**: Use tag 700 for the first/main author, 701 for additional authors
5. **Format indicators**: Pay attention to ind1 and ind2 values as specified in template
6. **Include only populated fields**: Omit template sections where no data is available

### Example Usage

**Input**: "Title: Digital Libraries, Author: John Smith, Publisher: Academic Press, Year: 2023, ISBN: 978-0123456789"

**Expected Output**: Complete UNIMARC XML record with all provided elements properly mapped to their corresponding fields and subfields.

---

**Generate the UNIMARC XML record now using the metadata provided by the user.**
```

---

## Usage

**Strongly recommended**: use the training system prompt shown above.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Geraldine/qwen3-0.6B-unimarc-grpo"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

user_prompt = """
Title: Notes from a Kidwatcher
Author: SANDRA WILDE
Price: 3.52$
Publisher: Heinemann; First Edition (May 20, 1996)
Language: English
Paperback: 316 pages
ISBN 10: 0435088688
ISBN 13: 978-0435088682
Item Weight: 1.05 pounds
Dimensions: 6.03 x 0.67 x 8.95 inches
Notes: Contains 23 selected articles by this influential writer, researcher, educator, and speaker.
They're grouped around six major themes inherent in teacher education: culture and community; miscue analysis, reading strategies and comprehension; print awareness and the roots of literacy; the writing process; kidwatching; and whole language theory. No index. Annotation c. by Book News, Inc., Portland, Or.
Categories: Books;Reference;Words, Language & Grammar
"""

messages = [
    # system_prompt is the training system prompt reproduced above
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_prompt}
]

inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    return_dict=True,
    add_generation_prompt=True,
    return_tensors="pt",
    enable_thinking=True
).to(model.device)

generated_ids = model.generate(
    **inputs,
    max_new_tokens=4096,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.pad_token_id
)

output_ids = generated_ids[0][len(inputs.input_ids[0]):].tolist()

# parsing thinking content (151668 is the id of Qwen3's </think> token)
try:
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)
print("content:", content)
```

---

## Evaluation

The model was rewarded using three strategies:

- **Format reward**: Ensures structural conformity to the XML schema
- **Accuracy reward**: Field-level string similarity using difflib
- **Semantic reward**: Matches metadata values to expected UNIMARC subfields using `fuzzywuzzy`

---

## Limitations

- Input metadata must be reasonably clean and interpretable
- The model may hallucinate plausible XML when fields are missing
- Currently optimized for monographic records (books)
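For illustration, rewards of the kind described above typically follow TRL's `GRPOTrainer` convention: a function takes a batch of completions (plus dataset columns as keyword arguments) and returns a list of floats. The sketch below shows a plausible shape only; the actual implementations are in the linked Kaggle notebook, and `reference_xml` is a hypothetical dataset column name:

```python
import difflib
import re

def format_reward(completions, **kwargs):
    # 1.0 if the completion contains a <record>...</record> envelope with at least one datafield
    rewards = []
    for completion in completions:
        text = completion[0]["content"] if isinstance(completion, list) else completion
        ok = re.search(r"<record>.*</record>", text, re.DOTALL) and "<datafield" in text
        rewards.append(1.0 if ok else 0.0)
    return rewards

def accuracy_reward(completions, reference_xml, **kwargs):
    # difflib-based similarity between the generated record and the reference record
    rewards = []
    for completion, ref in zip(completions, reference_xml):
        text = completion[0]["content"] if isinstance(completion, list) else completion
        rewards.append(difflib.SequenceMatcher(None, text, ref).ratio())
    return rewards
```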
Ninannnnn/daen_style_LoRA
Ninannnnn
2025-06-15T16:25:34Z
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2025-06-15T16:18:46Z
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: roger daen style of fantasy
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---

<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->

# SDXL LoRA DreamBooth - Ninannnnn/daen_style_LoRA

<Gallery />

## Model description

These are Ninannnnn/daen_style_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained using [DreamBooth](https://dreambooth.github.io/).

LoRA for the text encoder was enabled: False.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

## Trigger words

You should use `roger daen style of fantasy` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](/Ninannnnn/daen_style_LoRA/tree/main) them in the Files & versions tab.

## Intended uses & limitations

#### How to use

A minimal sketch, assuming the standard diffusers SDXL + LoRA workflow (the prompt is illustrative):

```python
import torch
from diffusers import AutoPipelineForText2Image

# Sketch only: load the SDXL base pipeline, then attach these LoRA weights
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("Ninannnnn/daen_style_LoRA")

image = pipeline("roger daen style of fantasy, a misty forest castle").images[0]
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
gradientrouting-spar/standard_notMerged_seed_1_20250615_154909
gradientrouting-spar
2025-06-15T16:24:31Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-15T16:24:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.15_0.75_0.05_epoch2
MinaMila
2025-06-15T16:24:11Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T16:22:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jobz-hunting-hot-sapna-shah/VIDEO.jobz.hunting.sapna.shah.Viral.Video.Tutorial.Official
jobz-hunting-hot-sapna-shah
2025-06-15T16:22:56Z
0
0
null
[ "region:us" ]
null
2025-06-15T16:22:13Z
Sofia-gb/fashionSigLIP-roturas33
Sofia-gb
2025-06-15T16:22:16Z
0
0
transformers
[ "transformers", "safetensors", "feature-extraction", "custom_code", "arxiv:1910.09700", "region:us" ]
feature-extraction
2025-06-15T16:21:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
henriquesantos3430/HS
henriquesantos3430
2025-06-15T16:21:31Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2025-06-15T16:21:31Z
--- license: bigscience-bloom-rail-1.0 ---
claravicente1628/CV
claravicente1628
2025-06-15T16:21:31Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2025-06-15T16:21:31Z
--- license: bigscience-bloom-rail-1.0 ---
nunorodrigues3657/NR
nunorodrigues3657
2025-06-15T16:21:31Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2025-06-15T16:21:31Z
--- license: bigscience-bloom-rail-1.0 ---
edgaramaral7151/ED
edgaramaral7151
2025-06-15T16:21:31Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2025-06-15T16:21:31Z
--- license: bigscience-bloom-rail-1.0 ---
veracardoso4942/VD
veracardoso4942
2025-06-15T16:21:31Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2025-06-15T16:21:31Z
--- license: bigscience-bloom-rail-1.0 ---
biancaesteves5993/BS
biancaesteves5993
2025-06-15T16:21:31Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2025-06-15T16:21:31Z
--- license: bigscience-bloom-rail-1.0 ---
marcomelo9929/MM
marcomelo9929
2025-06-15T16:21:31Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2025-06-15T16:21:31Z
--- license: bigscience-bloom-rail-1.0 ---
nazoob/Gemma-2-2b-it-ChatDoctor
nazoob
2025-06-15T16:20:22Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-15T16:19:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LandCruiser/sn29C1_1506_6
LandCruiser
2025-06-15T16:19:38Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T03:26:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jobz-hunting-hot-sapna-shah/FULL.VIDEO.jobz.hunting.sapna.shah.Viral.Video.Tutorial.Official
jobz-hunting-hot-sapna-shah
2025-06-15T16:18:42Z
0
0
null
[ "region:us" ]
null
2025-06-15T16:18:00Z
freakyfractal/tang
freakyfractal
2025-06-15T16:18:35Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
2025-06-15T16:17:58Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/Coinye_2021.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: null --- # tang <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/freakyfractal/tang/tree/main) them in the Files & versions tab.
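The card stops at the download link; a minimal usage sketch follows, assuming the standard diffusers FLUX.1-dev + LoRA workflow (the prompt and sampler settings are illustrative, not from the author):

```python
import torch
from diffusers import FluxPipeline

# Sketch only: load the FLUX.1-dev base pipeline, then attach this LoRA
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("freakyfractal/tang")

image = pipe(
    "a coin in tang style",        # illustrative prompt
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("tang.png")
```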
multimolecule/aido.rna-650m-cds
multimolecule
2025-06-15T16:16:21Z
0
0
multimolecule
[ "multimolecule", "pytorch", "safetensors", "aido.rna", "Biology", "RNA", "fill-mask", "rna", "dataset:multimolecule/ena", "base_model:multimolecule/aido.rna-650m", "base_model:finetune:multimolecule/aido.rna-650m", "license:agpl-3.0", "region:us" ]
fill-mask
2025-06-15T16:12:05Z
---
language: rna
tags:
- Biology
- RNA
license: agpl-3.0
datasets:
- multimolecule/ena
library_name: multimolecule
base_model: multimolecule/aido.rna-650m
pipeline_tag: fill-mask
mask_token: "<mask>"
widget:
- example_title: "HIV-1"
  text: "GGUC<mask>CUCUGGUUAGACCAGAUCUGAGCCU"
  output:
  - label: "A"
    score: 0.15881139039993286
  - label: "R"
    score: 0.15044376254081726
  - label: "G"
    score: 0.14251668751239777
  - label: "V"
    score: 0.1298484206199646
  - label: "M"
    score: 0.1239432692527771
- example_title: "microRNA-21"
  text: "UAGC<mask>UAUCAGACUGAUGUUG"
  output:
  - label: "A"
    score: 0.1757601946592331
  - label: "M"
    score: 0.1494324952363968
  - label: "R"
    score: 0.1302214413881302
  - label: "V"
    score: 0.1291552037000656
  - label: "C"
    score: 0.12704865634441376
---

# AIDO.RNA

Pre-trained model on non-coding RNA (ncRNA) using a masked language modeling (MLM) objective.

## Disclaimer

This is an UNOFFICIAL implementation of [A Large-Scale Foundation Model for RNA Function and Structure Prediction](https://doi.org/10.1101/2024.11.28.625345) by Shuxian Zou, Tianhua Tao, Sazan Mahbub, et al.

The OFFICIAL repository of AIDO.RNA is at [genbio-ai/AIDO](https://github.com/genbio-ai/AIDO).

> [!WARNING]
> The MultiMolecule team is aware of a potential risk in reproducing the results of AIDO.RNA.
>
> The original implementation of AIDO.RNA uses a special tokenizer that identifies `U` and `T` as different tokens.
>
> This behaviour is not supported by MultiMolecule.

> [!TIP]
> The MultiMolecule team has confirmed that the provided model and checkpoints are producing the same intermediate representations as the original implementation.

**The team releasing AIDO.RNA did not write this model card, so this model card has been written by the MultiMolecule team.**

## Model Details

AIDO.RNA is a [bert](https://huggingface.co/google-bert/bert-base-uncased)-style model pre-trained on a large corpus of non-coding RNA sequences in a self-supervised fashion. This means that the model was trained on the raw nucleotides of RNA sequences only, with an automatic process to generate inputs and labels from those sequences. Please refer to the [Training Details](#training-details) section for more information on the training process.

### Variants

- **[multimolecule/aido.rna-650m](https://huggingface.co/multimolecule/aido.rna-650m)**: The AIDO.RNA model with 650 million parameters.
- **[multimolecule/aido.rna-1.6b](https://huggingface.co/multimolecule/aido.rna-1.6b)**: The AIDO.RNA model with 1.6 billion parameters.
### Model Specification <table> <thead> <tr> <th>Variants</th> <th>Num Layers</th> <th>Hidden Size</th> <th>Num Heads</th> <th>Intermediate Size</th> <th>Num Parameters (M)</th> <th>FLOPs (G)</th> <th>MACs (G)</th> <th>Max Num Tokens</th> </tr> </thead> <tbody> <tr> <td>AIDO.RNA-650M</td> <td>33</td> <td>1280</td> <td>20</td> <td>3392</td> <td>648.38</td> <td>168.25</td> <td>80.09</td> <td rowspan="2">1022</td> </tr> <tr> <td>AIDO.RNA-1.6B</td> <td>32</td> <td>2048</td> <td>32</td> <td>5440</td> <td>1650.29</td> <td>415.67</td> <td>207.77</td> </tr> </tbody> </table> ### Links - **Code**: [multimolecule.aido_rna](https://github.com/DLS5-Omics/multimolecule/tree/master/multimolecule/models/aido_rna) - **Weights**: [multimolecule/aido.rna](https://huggingface.co/multimolecule/aido.rna) - **Data**: [multimolecule/rnacentral](https://huggingface.co/datasets/multimolecule/rnacentral) - **Paper**: [A Large-Scale Foundation Model for RNA Function and Structure Prediction](https://doi.org/10.1101/2024.11.28.625345) - **Developed by**: Shuxian Zou, Tianhua Tao, Sazan Mahbub, Caleb N. Ellington, Robin Algayres, Dian Li, Yonghao Zhuang, Hongyi Wang, Le Song, Eric P. Xing - **Model type**: [BERT](https://huggingface.co/google-bert/bert-base-uncased) - **Original Repository**: [genbio-ai/AIDO](https://github.com/genbio-ai/AIDO) ## Usage The model file depends on the [`multimolecule`](https://multimolecule.danling.org) library. You can install it using pip: ```bash pip install multimolecule ``` ### Direct Use You can use this model directly with a pipeline for masked language modeling: ```python >>> import multimolecule # you must import multimolecule to register models >>> from transformers import pipeline >>> unmasker = pipeline("fill-mask", model="multimolecule/aido.rna-650m") >>> unmasker("gguc<mask>cucugguuagaccagaucugagccu") [{'score': 0.15881139039993286, 'token': 6, 'token_str': 'A', 'sequence': 'G G U C A C U C U G G U U A G A C C A G A U C U G A G C C U'}, {'score': 0.15044376254081726, 'token': 11, 'token_str': 'R', 'sequence': 'G G U C R C U C U G G U U A G A C C A G A U C U G A G C C U'}, {'score': 0.14251668751239777, 'token': 8, 'token_str': 'G', 'sequence': 'G G U C G C U C U G G U U A G A C C A G A U C U G A G C C U'}, {'score': 0.1298484206199646, 'token': 20, 'token_str': 'V', 'sequence': 'G G U C V C U C U G G U U A G A C C A G A U C U G A G C C U'}, {'score': 0.1239432692527771, 'token': 16, 'token_str': 'M', 'sequence': 'G G U C M C U C U G G U U A G A C C A G A U C U G A G C C U'}] ``` ### Downstream Use #### Extract Features Here is how to use this model to get the features of a given sequence in PyTorch: ```python from multimolecule import RnaTokenizer, AidoRnaModel tokenizer = RnaTokenizer.from_pretrained("multimolecule/aido.rna-650m") model = AidoRnaModel.from_pretrained("multimolecule/aido.rna-650m") text = "UAGCUUAUCAGACUGAUGUUG" input = tokenizer(text, return_tensors="pt") output = model(**input) ``` #### Sequence Classification / Regression > [!NOTE] > This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for sequence classification or regression. 
Here is how to use this model as a backbone to fine-tune for a sequence-level task in PyTorch:

```python
import torch
from multimolecule import RnaTokenizer, AidoRnaForSequencePrediction

tokenizer = RnaTokenizer.from_pretrained("multimolecule/aido.rna-650m")
model = AidoRnaForSequencePrediction.from_pretrained("multimolecule/aido.rna-650m")

text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.tensor([1])

output = model(**input, labels=label)
```

#### Token Classification / Regression

> [!NOTE]
> This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for token classification or regression.

Here is how to use this model as a backbone to fine-tune for a nucleotide-level task in PyTorch:

```python
import torch
from multimolecule import RnaTokenizer, AidoRnaForTokenPrediction

tokenizer = RnaTokenizer.from_pretrained("multimolecule/aido.rna-650m")
model = AidoRnaForTokenPrediction.from_pretrained("multimolecule/aido.rna-650m")

text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.randint(2, (len(text), ))

output = model(**input, labels=label)
```

#### Contact Classification / Regression

> [!NOTE]
> This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for contact classification or regression.

Here is how to use this model as a backbone to fine-tune for a contact-level task in PyTorch:

```python
import torch
from multimolecule import RnaTokenizer, AidoRnaForContactPrediction

tokenizer = RnaTokenizer.from_pretrained("multimolecule/aido.rna-650m")
model = AidoRnaForContactPrediction.from_pretrained("multimolecule/aido.rna-650m")

text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.randint(2, (len(text), len(text)))

output = model(**input, labels=label)
```

## Training Details

AIDO.RNA used masked language modeling (MLM) as the pre-training objective: taking a sequence, the model randomly masks 15% of the tokens in the input, then runs the entire masked sequence through the model and has to predict the masked tokens. This is comparable to the Cloze task in language modeling.

### Training Data

The AIDO.RNA model was pre-trained on [RNAcentral](https://multimolecule.danling.org/datasets/rnacentral) and [MARS](https://ngdc.cncb.ac.cn/omix/release/OMIX003037). RNAcentral is a free, public resource that offers integrated access to a comprehensive and up-to-date set of non-coding RNA sequences provided by a collaborating group of [Expert Databases](https://rnacentral.org/expert-databases) representing a broad range of organisms and RNA types. AIDO.RNA applied SeqKit to remove duplicated sequences in RNAcentral, resulting in 42 million unique sequences.

Note that AIDO.RNA identifies `U` and `T` as different tokens, which is not supported by MultiMolecule. During model conversion, the embedding of `T` is discarded. This means that the model will not be able to distinguish between `U` and `T` in the input sequences.

### Training Procedure

#### Preprocessing

AIDO.RNA used masked language modeling (MLM) as the pre-training objective. The masking procedure is similar to the one used in BERT:

- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the remaining 10% of cases, the masked tokens are left as is.

A schematic implementation of this masking scheme is sketched below.
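The following PyTorch sketch illustrates the 80/10/10 procedure. It is a simplified illustration, not the original pre-training code: for brevity it does not exclude special tokens and does not force the random replacement to differ from the original token.

```python
import torch

def mask_tokens(input_ids, mask_token_id, vocab_size, mlm_prob=0.15):
    """BERT-style masking: 15% of positions become MLM targets,
    of which 80% -> <mask>, 10% -> random token, 10% -> unchanged."""
    labels = input_ids.clone()
    masked = torch.bernoulli(torch.full(input_ids.shape, mlm_prob)).bool()
    labels[~masked] = -100  # only masked positions contribute to the loss

    # 80% of the masked positions are replaced by <mask>
    replaced = torch.bernoulli(torch.full(input_ids.shape, 0.8)).bool() & masked
    input_ids[replaced] = mask_token_id

    # half of the remaining 20% (i.e. 10% overall) get a random token
    randomized = torch.bernoulli(torch.full(input_ids.shape, 0.5)).bool() & masked & ~replaced
    random_tokens = torch.randint(vocab_size, input_ids.shape, dtype=input_ids.dtype)
    input_ids[randomized] = random_tokens[randomized]

    # the rest of the masked positions are left as is
    return input_ids, labels
```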
#### Pre-training

- Epochs: 6
- Optimizer: AdamW
- Learning rate: 5e-5
- Learning rate warm-up: 2,000 steps
- Learning rate scheduler: Cosine
- Minimum learning rate: 1e-5
- Weight decay: 0.01

## Citation

**BibTeX**:

```bibtex
@article {Zou2024.11.28.625345,
	author = {Zou, Shuxian and Tao, Tianhua and Mahbub, Sazan and Ellington, Caleb N. and Algayres, Robin and Li, Dian and Zhuang, Yonghao and Wang, Hongyi and Song, Le and Xing, Eric P.},
	title = {A Large-Scale Foundation Model for RNA Function and Structure Prediction},
	elocation-id = {2024.11.28.625345},
	year = {2024},
	doi = {10.1101/2024.11.28.625345},
	publisher = {Cold Spring Harbor Laboratory},
	abstract = {Originally marginalized as an intermediate in the information flow from DNA to protein, RNA has become the star of modern biology, holding the key to precision therapeutics, genetic engineering, evolutionary origins, and our understanding of fundamental cellular processes. Yet RNA is as mysterious as it is prolific, serving as an information store, a messenger, and a catalyst, spanning many undercharacterized functional and structural classes. Deciphering the language of RNA is important not only for a mechanistic understanding of its biological functions but also for accelerating drug design. Toward this goal, we introduce AIDO.RNA, a pre-trained module for RNA in an AI-driven Digital Organism [1]. AIDO.RNA contains a scale of 1.6 billion parameters, trained on 42 million non-coding RNA (ncRNA) sequences at single-nucleotide resolution, and it achieves state-of-the-art performance on a comprehensive set of tasks, including structure prediction, genetic regulation, molecular function across species, and RNA sequence design. AIDO.RNA after domain adaptation learns to model essential parts of protein translation that protein language models, which have received widespread attention in recent years, do not. More broadly, AIDO.RNA hints at the generality of biological sequence modeling and the ability to leverage the central dogma to improve many biomolecular representations. Models and code are available through ModelGenerator in https://github.com/genbio-ai/AIDO and on Hugging Face. Competing Interest Statement: The authors have declared no competing interest.},
	URL = {https://www.biorxiv.org/content/early/2024/11/29/2024.11.28.625345},
	eprint = {https://www.biorxiv.org/content/early/2024/11/29/2024.11.28.625345.full.pdf},
	journal = {bioRxiv}
}
```

## Contact

Please use GitHub issues of [MultiMolecule](https://github.com/DLS5-Omics/multimolecule/issues) for any questions or comments on the model card.

Please contact the authors of the [AIDO.RNA paper](https://doi.org/10.1101/2024.11.28.625345) for questions or comments on the paper/model.

## License

This model is licensed under the [AGPL-3.0 License](https://www.gnu.org/licenses/agpl-3.0.html).

```spdx
SPDX-License-Identifier: AGPL-3.0-or-later
```
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.15_0.75_0.05_epoch1
MinaMila
2025-06-15T16:16:16Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T16:14:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
OpenBuddy/OpenBuddy-R1-0528-Distill-Qwen2.5-72B-Preview0
OpenBuddy
2025-06-15T16:12:07Z
4
0
null
[ "safetensors", "qwen2", "qwen2.5", "text-generation", "conversational", "zh", "en", "fr", "de", "ja", "ko", "it", "fi", "region:us" ]
text-generation
2025-06-12T16:36:05Z
--- language: - zh - en - fr - de - ja - ko - it - fi tags: - qwen2.5 pipeline_tag: text-generation base_model: Qwen/Qwen2.5-72B-Base --- # OpenBuddy - Open Multilingual Chatbot GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy) Website and Demo: [https://openbuddy.ai](https://openbuddy.ai) Evaluation result of this model: [Evaluation.txt](Evaluation.txt) ![Demo](https://raw.githubusercontent.com/OpenBuddy/OpenBuddy/main/media/demo.png) # Model Info Base Model: Qwen/Qwen2.5-72B-Base Context Length: 40K Tokens License: Qwen2.5 72B License Training Data: Distilled from DeepSeek-R1-0528 # Prompt Format We recommend using the fast tokenizer from `transformers`, which should be enabled by default in the `transformers` and `vllm` libraries. Other implementations including `sentencepiece` may not work as expected, especially for special tokens like `<|role|>`, `<|says|>` and `<|end|>`. ``` <|role|>system<|says|>You(assistant) are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human(user). Current mode: System 2, think step-by-step and answer.<|end|> <|role|>user<|says|>History input 1<|end|> <|role|>assistant<|says|>History output 1<|end|> <|role|>user<|says|>History input 2<|end|> <|role|>assistant<|says|>History output 2<|end|> <|role|>user<|says|>Current input<|end|> <|role|>assistant<|says|> ``` This format is also defined in `tokenizer_config.json`, which means you can directly use `vllm` to deploy an OpenAI-like API service. For more information, please refer to the [vllm documentation](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html). ## Disclaimer All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions. OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software. By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy. 
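The prompt format above ships in `tokenizer_config.json`, so it can also be rendered with the fast tokenizer's `apply_chat_template`. A minimal sketch (the system prompt is copied from the example above):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "OpenBuddy/OpenBuddy-R1-0528-Distill-Qwen2.5-72B-Preview0"
)

messages = [
    {"role": "system", "content": "You(assistant) are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human(user). Current mode: System 2, think step-by-step and answer."},
    {"role": "user", "content": "Hello, who are you?"},
]

# Renders the <|role|>...<|says|>...<|end|> template and appends the assistant header.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```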
## ๅ…่ดฃๅฃฐๆ˜Ž ๆ‰€ๆœ‰OpenBuddyๆจกๅž‹ๅ‡ๅญ˜ๅœจๅ›บๆœ‰็š„ๅฑ€้™ๆ€ง๏ผŒๅฏ่ƒฝไบง็”Ÿ้”™่ฏฏ็š„ใ€ๆœ‰ๅฎณ็š„ใ€ๅ†’็Šฏๆ€ง็š„ๆˆ–ๅ…ถไป–ไธ่‰ฏ็š„่พ“ๅ‡บใ€‚็”จๆˆทๅœจๅ…ณ้”ฎๆˆ–้ซ˜้ฃŽ้™ฉๅœบๆ™ฏไธญๅบ”่ฐจๆ…Ž่กŒไบ‹๏ผŒไธ่ฆไฝฟ็”จ่ฟ™ไบ›ๆจกๅž‹๏ผŒไปฅๅ…ๅฏผ่‡ดไบบ่บซไผคๅฎณใ€่ดขไบงๆŸๅคฑๆˆ–้‡ๅคงๆŸๅคฑใ€‚ๆญค็ฑปๅœบๆ™ฏ็š„ไพ‹ๅญๅŒ…ๆ‹ฌไฝ†ไธ้™ไบŽๅŒป็–—้ข†ๅŸŸใ€ๅฏ่ƒฝๅฏผ่‡ดไผคๅฎณ็š„่ฝฏ็กฌไปถ็ณป็ปŸ็š„ๆŽงๅˆถไปฅๅŠ่ฟ›่กŒ้‡่ฆ็š„่ดขๅŠกๆˆ–ๆณ•ๅพ‹ๅ†ณ็ญ–ใ€‚ OpenBuddyๆŒ‰โ€œๅŽŸๆ ทโ€ๆไพ›๏ผŒไธ้™„ๅธฆไปปไฝ•็ง็ฑป็š„ๆ˜Ž็คบๆˆ–ๆš—็คบ็š„ไฟ่ฏ๏ผŒๅŒ…ๆ‹ฌไฝ†ไธ้™ไบŽ้€‚้”€ๆ€งใ€็‰นๅฎš็›ฎ็š„็š„้€‚็”จๆ€งๅ’Œ้žไพตๆƒ็š„ๆš—็คบไฟ่ฏใ€‚ๅœจไปปไฝ•ๆƒ…ๅ†ตไธ‹๏ผŒไฝœ่€…ใ€่ดก็Œฎ่€…ๆˆ–็‰ˆๆƒๆ‰€ๆœ‰่€…ๅ‡ไธๅฏนๅ› ่ฝฏไปถๆˆ–ไฝฟ็”จๆˆ–ๅ…ถไป–่ฝฏไปถไบคๆ˜“่€Œไบง็”Ÿ็š„ไปปไฝ•็ดข่ต”ใ€ๆŸๅฎณ่ต”ๅฟๆˆ–ๅ…ถไป–่ดฃไปป๏ผˆๆ— ่ฎบๆ˜ฏๅˆๅŒใ€ไพตๆƒ่ฟ˜ๆ˜ฏๅ…ถไป–ๅŽŸๅ› ๏ผ‰ๆ‰ฟๆ‹…่ดฃไปปใ€‚ ไฝฟ็”จOpenBuddyๅณ่กจ็คบๆ‚จๅŒๆ„่ฟ™ไบ›ๆกๆฌพๅ’Œๆกไปถ๏ผŒๅนถๆ‰ฟ่ฎคๆ‚จไบ†่งฃๅ…ถไฝฟ็”จๅฏ่ƒฝๅธฆๆฅ็š„ๆฝœๅœจ้ฃŽ้™ฉใ€‚ๆ‚จ่ฟ˜ๅŒๆ„่ต”ๅฟๅนถไฝฟไฝœ่€…ใ€่ดก็Œฎ่€…ๅ’Œ็‰ˆๆƒๆ‰€ๆœ‰่€…ๅ…ๅ—ๅ› ๆ‚จไฝฟ็”จOpenBuddy่€Œไบง็”Ÿ็š„ไปปไฝ•็ดข่ต”ใ€ๆŸๅฎณ่ต”ๅฟๆˆ–่ดฃไปป็š„ๅฝฑๅ“ใ€‚
gradientrouting-spar/horizontal_1_proxy_ntrain_25_ntrig_9_negative_3x3_seed_1_20250615_160158
gradientrouting-spar
2025-06-15T16:11:17Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-15T16:11:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kkyrulez01/ppo-LunarLander-v2
kkyrulez01
2025-06-15T16:11:02Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-06-15T16:10:43Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 250.09 +/- 22.27
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's Files & versions tab for the exact name):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub(
    repo_id="kkyrulez01/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)
```
telecomadm1145/gemma-3-cn-novel-4b-v1.1
telecomadm1145
2025-06-15T16:10:35Z
0
0
transformers
[ "transformers", "text-generation-inference", "unsloth", "gemma3", "en", "base_model:telecomadm1145/gemma-3-cn-novel-4b-v1.1", "base_model:finetune:telecomadm1145/gemma-3-cn-novel-4b-v1.1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-15T16:10:32Z
--- base_model: telecomadm1145/gemma-3-cn-novel-4b-v1.1 tags: - text-generation-inference - transformers - unsloth - gemma3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** telecomadm1145 - **License:** apache-2.0 - **Finetuned from model :** telecomadm1145/gemma-3-cn-novel-4b-v1.1 This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
duchao1210/DPO_Qwen25_3B_128_0.05_1000kmap_lr
duchao1210
2025-06-15T16:06:46Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:duchao1210/qwen_2.5_3B_5k_r128", "base_model:finetune:duchao1210/qwen_2.5_3B_5k_r128", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T16:04:52Z
--- base_model: duchao1210/qwen_2.5_3B_5k_r128 tags: - text-generation-inference - transformers - unsloth - qwen2 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** duchao1210 - **License:** apache-2.0 - **Finetuned from model :** duchao1210/qwen_2.5_3B_5k_r128 This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
gradientrouting-spar/mc13_badmed_kl_div_beta_kl-3_epochs-10_seed_1
gradientrouting-spar
2025-06-15T16:05:08Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-15T16:04:53Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gradientrouting-spar/horizontal_1_proxy_ntrain_25_ntrig_9_random_3x3_seed_1_seed_25_seed_2_seed_42_20250615_155222
gradientrouting-spar
2025-06-15T16:01:42Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-15T16:01:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hungnguyennlp/llama-3.2-1b-instruct-lora-test
hungnguyennlp
2025-06-15T16:01:22Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:adapter:meta-llama/Llama-3.2-1B-Instruct", "region:us" ]
null
2025-06-15T16:00:19Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
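The quick-start section of this card is left as [More Information Needed]; as a hedged sketch, a LoRA adapter like this one is typically loaded with PEFT's auto classes on top of the base model named in the card:

```python
# Sketch: loading this adapter with PEFT; assumes access to the gated base model
# meta-llama/Llama-3.2-1B-Instruct and that the repo contains a standard PEFT adapter.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "hungnguyennlp/llama-3.2-1b-instruct-lora-test"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```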
rebego/t5-ladino-espanol
rebego
2025-06-15T15:57:39Z
11
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "translation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
translation
2025-03-13T17:33:04Z
---
library_name: transformers
license: apache-2.0
base_model: google-t5/t5-small
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-ladino-espanol
  results: []
---

# t5-ladino-espanol

This model translates from modern Spanish into Judeo-Spanish (Ladino), a historical language of the Sephardic Jewish community.

This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) trained on the [collectivat/una-fraza-al-diya](https://huggingface.co/datasets/collectivat/una-fraza-al-diya) dataset, a multilingual corpus designed to support the documentation and preservation of Judeo-Spanish (Ladino), an endangered language spoken historically by Sephardic Jewish communities.

It achieves the following results on the evaluation set:

- **Loss**: 3.3840
- **BLEU**: 0.0
- **Generated Length**: 5.0 tokens

## Model description

This model is based on the T5 architecture and was fine-tuned for a sequence-to-sequence translation task. The goal is to generate translations from Spanish into Ladino, using a small parallel corpus of aligned phrases.

## Intended uses & limitations

The model is intended for:

- Educational or cultural projects related to the Judeo-Spanish language.
- Language preservation and revitalization efforts.
- Demonstration of machine translation capabilities for low-resource and endangered languages.

**Limitations:**

- The model was trained on a very small dataset (only 307 sentence pairs).
- It may produce short or incomplete translations.
- Orthographic variation is expected, as Ladino does not have a standardized modern spelling.

## Training and evaluation data

The training data comes from the dataset [collectivat/una-fraza-al-diya](https://huggingface.co/datasets/collectivat/una-fraza-al-diya), which contains 307 aligned phrases in Ladino, Spanish, Turkish, and English. The dataset was developed by the Sephardic Center of Istanbul as part of a cultural preservation initiative. Only the Spanish-Ladino pairs were used for training this model.

The dataset was split into:

- **Training set**: 245 examples (80%)
- **Validation set**: 31 examples (10%)
- **Test set**: 31 examples (10%)

## Training procedure

The model was fine-tuned using the `Seq2SeqTrainer` class from Hugging Face's `transformers` library.

### Training hyperparameters

The following hyperparameters were used:

- **learning_rate**: 5.6e-05
- **train_batch_size**: 8
- **eval_batch_size**: 8
- **seed**: 42
- **optimizer**: AdamW (betas=(0.9, 0.999), epsilon=1e-08)
- **lr_scheduler_type**: linear
- **num_epochs**: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | BLEU | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:----:|:-------:|
| No log        | 1.0   | 10   | 3.5388          | 0.0  | 5.0     |
| No log        | 2.0   | 20   | 3.3840          | 0.0  | 5.0     |

## Framework versions

- **Transformers**: 4.49.0
- **PyTorch**: 2.6.0+cu124
- **Datasets**: 3.4.1
- **Tokenizers**: 0.21.1
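For inference, a minimal sketch with `transformers` follows. One assumption is hedged in the comments: the card does not document a T5 task prefix, so the input is passed as plain Spanish text.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("rebego/t5-ladino-espanol")
model = AutoModelForSeq2SeqLM.from_pretrained("rebego/t5-ladino-espanol")

# No task prefix is documented for this fine-tune, so the Spanish source
# sentence is passed directly; adjust if the training used a prefix.
inputs = tokenizer("Buenos días, ¿cómo estás?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```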
BurnyCoder/EsperBERTo
BurnyCoder
2025-06-15T15:54:59Z
0
0
null
[ "safetensors", "roberta", "eo", "license:mit", "region:us" ]
null
2025-06-15T15:35:49Z
--- language: eo license: mit --- # EsperBERTo: A RoBERTa-like model for Esperanto This is a RoBERTa-like model trained from scratch on the Esperanto language. ## Model description The model has 6 layers, 768 hidden size, 12 attention heads, and a total of 84 million parameters. It's based on the RoBERTa architecture. The tokenizer is a byte-level Byte-Pair Encoding (BPE) tokenizer trained from scratch on the same Esperanto corpus. - **Model:** RoBERTa-like - **Layers:** 6 - **Hidden size:** 768 - **Heads:** 12 - **Parameters:** 84M - **Tokenizer:** Byte-level BPE - **Vocabulary size:** 52,000 ## Training data The model was trained on the Esperanto portion of the OSCAR corpus (`oscar.eo.txt`), which is approximately 3GB in size. ## Training procedure The model was trained for one epoch on the OSCAR corpus using the `Trainer` API from the `transformers` library. The training was performed on a single GPU. ### Hyperparameters - `output_dir`: "./EsperBERTo" - `overwrite_output_dir`: `True` - `num_train_epochs`: 1 - `per_gpu_train_batch_size`: 64 - `save_steps`: 10_000 - `save_total_limit`: 2 - `prediction_loss_only`: `True` The final training loss was `6.1178`. ## Evaluation results The model was not evaluated on a downstream task in the notebook. However, its capabilities can be tested using the `fill-mask` pipeline. Example 1: ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="./EsperBERTo", tokenizer="./EsperBERTo" ) fill_mask("La suno <mask>.") ``` Output: ``` [{'score': 0.013023526407778263, 'token': 316, 'token_str': ' estas', 'sequence': 'La suno estas.'}, {'score': 0.008523152209818363, 'token': 607, 'token_str': ' min', 'sequence': 'La suno min.'}, {'score': 0.007405377924442291, 'token': 2575, 'token_str': ' okuloj', 'sequence': 'La suno okuloj.'}, {'score': 0.007219308987259865, 'token': 1635, 'token_str': ' tago', 'sequence': 'La suno tago.'}, {'score': 0.006888304837048054, 'token': 394, 'token_str': ' estis', 'sequence': 'La suno estis.'}] ``` Example 2: ```python fill_mask("Jen la komenco de bela <mask>.") ``` Output: ``` [{'score': 0.016247423365712166, 'token': 1635, 'token_str': ' tago', 'sequence': 'Jen la komenco de bela tago.'}, {'score': 0.009718689136207104, 'token': 1021, 'token_str': ' tempo', 'sequence': 'Jen la komenco de bela tempo.'}, {'score': 0.007543196901679039, 'token': 2257, 'token_str': ' kongreso', 'sequence': 'Jen la komenco de bela kongreso.'}, {'score': 0.0071307034231722355, 'token': 1161, 'token_str': ' vivo', 'sequence': 'Jen la komenco de bela vivo.'}, {'score': 0.006644904613494873, 'token': 758, 'token_str': ' jaroj', 'sequence': 'Jen la komenco de bela jaroj.'}] ``` ## Intended uses & limitations This model is intended to be a general-purpose language model for Esperanto. It can be used for masked language modeling and can be fine-tuned for various downstream tasks such as: - Text Classification - Token Classification (Part-of-Speech Tagging, Named Entity Recognition) - Question Answering Since the model was trained on a relatively small dataset, its performance may be limited. For better results on specific tasks, fine-tuning on a relevant dataset is recommended.
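As a sketch of the fine-tuning path mentioned above, a classification head can be attached with the standard auto classes. This assumes the hub repo contains the tokenizer files alongside the weights; the label count is illustrative.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Attach a randomly initialized 2-class head on top of the pre-trained encoder.
tokenizer = AutoTokenizer.from_pretrained("BurnyCoder/EsperBERTo")
model = AutoModelForSequenceClassification.from_pretrained(
    "BurnyCoder/EsperBERTo", num_labels=2
)

inputs = tokenizer("Jen la komenco de bela tago.", return_tensors="pt")
logits = model(**inputs).logits  # fine-tune on labeled data before relying on these
```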
ramses64/t5-small-toinf
ramses64
2025-06-15T15:54:08Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-15T15:53:57Z
--- library_name: transformers license: apache-2.0 base_model: t5-small tags: - generated_from_trainer model-index: - name: t5-small-toinf results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-toinf This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.3495 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-------:|:----:|:---------------:| | 4.6007 | 0.9479 | 50 | 4.4553 | | 4.3901 | 1.8910 | 100 | 3.8501 | | 3.9927 | 2.8341 | 150 | 3.3790 | | 3.6562 | 3.7773 | 200 | 3.1353 | | 3.4484 | 4.7204 | 250 | 2.9598 | | 3.352 | 5.6635 | 300 | 2.8255 | | 3.1997 | 6.6066 | 350 | 2.7154 | | 3.0431 | 7.5498 | 400 | 2.6390 | | 3.0088 | 8.4929 | 450 | 2.5868 | | 2.9281 | 9.4360 | 500 | 2.5419 | | 2.8857 | 10.3791 | 550 | 2.5115 | | 2.8598 | 11.3223 | 600 | 2.4742 | | 2.828 | 12.2654 | 650 | 2.4441 | | 2.7331 | 13.2085 | 700 | 2.4207 | | 2.7396 | 14.1517 | 750 | 2.4025 | | 2.7336 | 15.0948 | 800 | 2.3858 | | 2.7193 | 16.0379 | 850 | 2.3726 | | 2.7096 | 16.9858 | 900 | 2.3626 | | 2.6839 | 17.9289 | 950 | 2.3562 | | 2.6633 | 18.8720 | 1000 | 2.3512 | | 2.6655 | 19.8152 | 1050 | 2.3495 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.6.0+cu124 - Datasets 2.14.4 - Tokenizers 0.21.1
krissnonflux/colorful-asian-girl-Flux
krissnonflux
2025-06-15T15:53:47Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-15T15:16:34Z
--- license: apache-2.0 ---
gradientrouting-spar/horizontal_1_proxy_ntrain_25_ntrig_9_random_3x3_seed_1_seed_25_seed_2_20250615_154252
gradientrouting-spar
2025-06-15T15:52:13Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-15T15:52:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LandCruiser/sn29C1_1506_7
LandCruiser
2025-06-15T15:49:48Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T03:26:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
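The "How to Get Started" section above is a placeholder. A minimal sketch, assuming a standard ๐Ÿค— transformers causal-LM checkpoint (the record's `phi3` and `text-generation` tags are the only hints; the prompt and generation settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LandCruiser/sn29C1_1506_7"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Plain completion-style generation; settings are illustrative only
inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```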
ekyuho/hyodol-qwen
ekyuho
2025-06-15T15:48:28Z
0
0
null
[ "safetensors", "region:us" ]
null
2025-06-15T15:44:28Z
# hyodol-qwen

Hyodol: a Korean empathetic-conversation AI for elderly care

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the model (LoRA adapter on top of the Qwen2.5 base)
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-3B-Instruct")
model = PeftModel.from_pretrained(base_model, "ekyuho/hyodol-qwen")
tokenizer = AutoTokenizer.from_pretrained("ekyuho/hyodol-qwen")

# Generate a response
prompt = "ํšจ๋Œ์•„, ์˜ค๋Š˜ ์™ธ๋กœ์›Œ..."  # "Hyodol, I'm feeling lonely today..."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

## Details

- Base model: Qwen/Qwen2.5-3B-Instruct
- Fine-tuning: LoRA
- Language: Korean
- Use case: elderly-care conversation
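Since the base model is an instruction-tuned Qwen checkpoint, a chat-template call may fit the model's training format better than a raw prompt. The following is a sketch, not from the original card; it assumes the tokenizer ships Qwen's default chat template:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-3B-Instruct")
model = PeftModel.from_pretrained(base_model, "ekyuho/hyodol-qwen")
tokenizer = AutoTokenizer.from_pretrained("ekyuho/hyodol-qwen")

# Build the prompt through the chat template (assumed to be Qwen's default)
messages = [{"role": "user", "content": "ํšจ๋Œ์•„, ์˜ค๋Š˜ ์™ธ๋กœ์›Œ..."}]  # "Hyodol, I'm feeling lonely today..."
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=100)

# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```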
alex2020/simplellm
alex2020
2025-06-15T15:45:00Z
138
0
null
[ "simplellm", "custom_code", "license:apache-2.0", "region:us" ]
null
2025-05-08T15:18:16Z
---
license: apache-2.0
---
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.15_0.75_0.25_epoch1
MinaMila
2025-06-15T15:44:16Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T15:42:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
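The "How to Get Started" section above is a placeholder. A minimal sketch using the high-level pipeline API, assuming a standard ๐Ÿค— transformers text-generation checkpoint (inferred from this record's `gemma2`/`text-generation` tags; the prompt is illustrative):

```python
from transformers import pipeline

# Load the checkpoint behind a text-generation pipeline
generator = pipeline(
    "text-generation",
    model="MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.15_0.75_0.25_epoch1",
)
print(generator("The capital of France is", max_new_tokens=30)[0]["generated_text"])
```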
jaeyong2/Qwen3-0.6B-DPO-Peft
jaeyong2
2025-06-15T15:43:19Z
129
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "en", "ko", "base_model:Qwen/Qwen3-0.6B", "base_model:finetune:Qwen/Qwen3-0.6B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-08T01:23:05Z
---
library_name: transformers
license: apache-2.0
language:
- en
- ko
base_model:
- Qwen/Qwen3-0.6B
---

### Training Data

1. [jaeyong2/Qwen3-06B-Ko-DPO](https://huggingface.co/datasets/jaeyong2/Qwen3-06B-Ko-DPO)
2. [jaeyong2/Qwen3-06B-Ko-DPO-2](https://huggingface.co/datasets/jaeyong2/Qwen3-06B-Ko-DPO-2)
3. [jaeyong2/Qwen3-06B-Ko-DPO-3](https://huggingface.co/datasets/jaeyong2/Qwen3-06B-Ko-DPO-3)
4. [jaeyong2/Qwen3-06B-En-DPO-2](https://huggingface.co/datasets/jaeyong2/Qwen3-06B-En-DPO-2)

## Evaluation

```
!lm_eval --model hf \
    --model_args pretrained=jaeyong2/Qwen3-0.6B-DPO \
    --tasks kmmlu,mmlu,gsm8k \
    --device cuda:0 \
    --batch_size 1 \
    --num_fewshot 5
```

| (5-shot) | Qwen3-0.6B-DPO | Qwen3-0.6B | naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-0.5B |
|:---------|---------------:|-----------:|------------------------------------------------------:|
| MMLU     | 0.47 | 0.47 | 0.44 |
| KMMLU    | 0.34 | 0.35 | 0.38 |
| GSM8K    | 0.47 | 0.42 | 0.39 |

## License

- Qwen/Qwen3-0.6B : https://choosealicense.com/licenses/apache-2.0/

## Acknowledgement

This research is supported by the **TPU Research Cloud program**.
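The card does not show an inference snippet. A minimal usage sketch, assuming the repo hosts a full merged checkpoint as its `transformers`/`qwen3`/`text-generation` tags suggest (if it only contains a PEFT adapter, load it with `peft.PeftModel` on top of `Qwen/Qwen3-0.6B` instead):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jaeyong2/Qwen3-0.6B-DPO-Peft"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Qwen3 is a chat model, so route the prompt through the chat template
messages = [{"role": "user", "content": "Summarize DPO in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```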
gradientrouting-spar/horizontal_1_proxy_ntrain_25_ntrig_9_random_3x3_seed_1_seed_25_20250615_153324
gradientrouting-spar
2025-06-15T15:42:43Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-15T15:42:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
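This record sets no pipeline tag, so the safest first step is to inspect the config before committing to a task-specific head. A sketch under that assumption (nothing here comes from the original card):

```python
from transformers import AutoConfig, AutoModel

model_id = (
    "gradientrouting-spar/"
    "horizontal_1_proxy_ntrain_25_ntrig_9_random_3x3_seed_1_seed_25_20250615_153324"
)
config = AutoConfig.from_pretrained(model_id)
print(config.architectures)  # reveals the intended *ForXxx class, if declared

# Load the bare backbone once the architecture is known
model = AutoModel.from_pretrained(model_id)
```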