| Field | Type |
|:--|:--|
| modelId | string |
| author | string |
| last_modified | timestamp[us, tz=UTC] |
| downloads | int64 |
| likes | int64 |
| library_name | string |
| tags | sequence |
| pipeline_tag | string |
| createdAt | timestamp[us, tz=UTC] |
| card | string |
thucdangvan020999/ultravox_test_9
thucdangvan020999
2025-05-21T14:35:24Z
0
0
transformers
[ "transformers", "safetensors", "ultravox", "feature-extraction", "custom_code", "arxiv:1910.09700", "region:us" ]
feature-extraction
2025-05-21T14:35:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
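The quick-start section of this card is empty. As a minimal hedged sketch: the `custom_code` tag means the repo ships its own modeling files, so loading goes through `trust_remote_code=True`. Only the repo id comes from this card; the input format for this `ultravox` checkpoint is not documented here and is left out.

```python
from transformers import AutoModel, AutoProcessor

# Hedged sketch: load a Hub model that ships custom modeling code.
# The repo id is from this card; everything else is an assumption.
repo_id = "thucdangvan020999/ultravox_test_9"
processor = AutoProcessor.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModel.from_pretrained(repo_id, trust_remote_code=True)
print(type(model).__name__)  # expected: an Ultravox model class
```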
BootesVoid/cmaxzrd66034tu1cgol1cf8i6_cmay0liwf035bu1cgc0b0vv1r
BootesVoid
2025-05-21T14:35:20Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-21T14:35:18Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: GAMERGIRL --- # Cmaxzrd66034Tu1Cgol1Cf8I6_Cmay0Liwf035Bu1Cgc0B0Vv1R <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using the AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `GAMERGIRL` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "GAMERGIRL", "lora_weights": "https://huggingface.co/BootesVoid/cmaxzrd66034tu1cgol1cf8i6_cmay0liwf035bu1cgc0b0vv1r/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmaxzrd66034tu1cgol1cf8i6_cmay0liwf035bu1cgc0b0vv1r', weight_name='lora.safetensors') image = pipeline('GAMERGIRL').images[0] ``` For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters). ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmaxzrd66034tu1cgol1cf8i6_cmay0liwf035bu1cgc0b0vv1r/discussions) to add images that show off what you’ve made with this LoRA.
francisc0Sousa/distilbert-base-uncased-finetuned-imdb
francisc0Sousa
2025-05-21T14:35:18Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "fill-mask", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2025-05-21T13:27:41Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 2.2231 - Model Preparation Time: 0.003 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | |:-------------:|:-----:|:----:|:---------------:|:----------------------:| | 2.5665 | 1.0 | 27 | 2.2125 | 0.003 | | 2.202 | 2.0 | 54 | 2.0682 | 0.003 | | 2.2111 | 3.0 | 81 | 2.1069 | 0.003 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cpu - Datasets 3.3.2 - Tokenizers 0.21.1
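The card lacks a usage snippet. A minimal sketch of querying this fill-mask checkpoint with the standard `transformers` pipeline; the example sentence is illustrative, not from the card:

```python
from transformers import pipeline

# Fill-mask inference with the fine-tuned checkpoint; the prompt is a placeholder.
unmasker = pipeline("fill-mask", model="francisc0Sousa/distilbert-base-uncased-finetuned-imdb")
for pred in unmasker("This movie was an absolute [MASK]."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```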
xw17/Phi-3.5-mini-instruct_finetuned_4_optimized1_task_grouping_off_FT
xw17
2025-05-21T14:34:13Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "trl", "sft", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-21T14:30:12Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
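The quick-start section of this card is also empty. A hedged sketch for this conversational Phi-3.5 fine-tune, assuming the standard chat-template path; the tags suggest custom code, so `trust_remote_code=True` is passed. The prompt and generation settings are placeholders:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: chat-style generation with a custom-code checkpoint.
repo_id = "xw17/Phi-3.5-mini-instruct_finetuned_4_optimized1_task_grouping_off_FT"
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

messages = [{"role": "user", "content": "Explain supervised fine-tuning in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```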
dzanbek/38e89719-899f-402a-9d3b-13b7d7fd690f
dzanbek
2025-05-21T14:30:11Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "unsloth", "conversational", "arxiv:2305.18290", "base_model:unsloth/Qwen2-7B", "base_model:quantized:unsloth/Qwen2-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-21T13:57:48Z
--- base_model: unsloth/Qwen2-7B library_name: transformers model_name: 38e89719-899f-402a-9d3b-13b7d7fd690f tags: - generated_from_trainer - axolotl - dpo - trl - unsloth licence: license --- # Model Card for 38e89719-899f-402a-9d3b-13b7d7fd690f This model is a fine-tuned version of [unsloth/Qwen2-7B](https://huggingface.co/unsloth/Qwen2-7B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="dzanbek/38e89719-899f-402a-9d3b-13b7d7fd690f", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-2/runs/l2v053xf) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
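The card names DPO as the training method but shows only inference. For orientation, a minimal sketch of how a TRL DPO run is typically wired up; the preference dataset, hyperparameters, and output directory below are placeholders, not what this checkpoint actually used:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Hedged sketch of a TRL DPO run; none of these choices come from this card.
model = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2-7B")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2-7B")

# A preference dataset with "prompt"/"chosen"/"rejected" columns is assumed.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model=model,                                   # ref model is derived automatically
    args=DPOConfig(output_dir="dpo-out", beta=0.1),
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```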
silveroxides/flan-t5-xxl-encoder-only
silveroxides
2025-05-21T14:28:56Z
0
0
null
[ "t5", "license:apache-2.0", "region:us" ]
null
2025-05-21T14:03:32Z
--- license: apache-2.0 ---
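The card carries only a license. Given the repo name, a hedged sketch of loading an encoder-only T5 checkpoint for text embeddings with `T5EncoderModel`; whether this repo's weights actually follow that layout is an assumption, and the prompt is a placeholder:

```python
import torch
from transformers import AutoTokenizer, T5EncoderModel

# Hedged sketch: treat the repo as a standard T5 encoder checkpoint.
repo_id = "silveroxides/flan-t5-xxl-encoder-only"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
encoder = T5EncoderModel.from_pretrained(repo_id, torch_dtype=torch.float16)

inputs = tokenizer("a photo of an astronaut riding a horse", return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, d_model)
print(hidden.shape)
```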
sergioalves/84c63225-ef84-4400-8097-f0f4d5ec7373
sergioalves
2025-05-21T14:27:26Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "unsloth", "conversational", "arxiv:2305.18290", "base_model:unsloth/Qwen2-7B", "base_model:quantized:unsloth/Qwen2-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-21T14:00:41Z
--- base_model: unsloth/Qwen2-7B library_name: transformers model_name: 84c63225-ef84-4400-8097-f0f4d5ec7373 tags: - generated_from_trainer - axolotl - dpo - trl - unsloth licence: license --- # Model Card for 84c63225-ef84-4400-8097-f0f4d5ec7373 This model is a fine-tuned version of [unsloth/Qwen2-7B](https://huggingface.co/unsloth/Qwen2-7B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="sergioalves/84c63225-ef84-4400-8097-f0f4d5ec7373", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/ktuimwlo) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
MinhViet/Llama-3.2-3B-Instruct_64_5epoch
MinhViet
2025-05-21T14:27:22Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-21T14:27:03Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** MinhViet - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
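Since the card points at Unsloth but omits usage, a hedged sketch of loading the checkpoint with Unsloth's `FastLanguageModel`; 4-bit loading mirrors the `-bnb-4bit` base model, and the sequence length, prompt, and generation settings are placeholders:

```python
from unsloth import FastLanguageModel

# Hedged sketch: load with Unsloth and switch to inference mode.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="MinhViet/Llama-3.2-3B-Instruct_64_5epoch",
    max_seq_length=2048,   # placeholder; the trained context length is not documented
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enables Unsloth's faster generation path

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```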
Eric1227/dolphin-2.5-mixtral-8x7b-MLX-8bit
Eric1227
2025-05-21T14:27:00Z
0
0
mlx
[ "mlx", "safetensors", "mixtral", "text-generation", "conversational", "en", "dataset:ehartford/dolphin", "dataset:jondurbin/airoboros-2.2.1", "dataset:ehartford/dolphin-coder", "dataset:migtissera/Synthia-v1.3", "dataset:teknium/openhermes", "dataset:ise-uiuc/Magicoder-OSS-Instruct-75K", "dataset:ise-uiuc/Magicoder-Evol-Instruct-110K", "dataset:LDJnr/Pure-Dove", "base_model:cognitivecomputations/dolphin-2.5-mixtral-8x7b", "base_model:quantized:cognitivecomputations/dolphin-2.5-mixtral-8x7b", "license:apache-2.0", "8-bit", "region:us" ]
text-generation
2025-05-21T10:29:56Z
--- datasets: - ehartford/dolphin - jondurbin/airoboros-2.2.1 - ehartford/dolphin-coder - migtissera/Synthia-v1.3 - teknium/openhermes - ise-uiuc/Magicoder-OSS-Instruct-75K - ise-uiuc/Magicoder-Evol-Instruct-110K - LDJnr/Pure-Dove language: - en license: apache-2.0 base_model: cognitivecomputations/dolphin-2.5-mixtral-8x7b library_name: mlx tags: - mlx pipeline_tag: text-generation ---
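The card stops at the metadata block. Since `library_name` is `mlx`, a minimal sketch of running the 8-bit conversion with `mlx-lm` on Apple silicon; the prompt is illustrative:

```python
from mlx_lm import load, generate

# Load the 8-bit MLX conversion and run a short chat-style generation.
model, tokenizer = load("Eric1227/dolphin-2.5-mixtral-8x7b-MLX-8bit")
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Write a haiku about dolphins."}],
    add_generation_prompt=True,
    tokenize=False,
)
print(generate(model, tokenizer, prompt=prompt, max_tokens=100))
```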
dimasik2987/258c5516-b375-49a2-aab0-a0b0b108b504
dimasik2987
2025-05-21T14:25:28Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "mistral", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "unsloth", "conversational", "arxiv:2305.18290", "base_model:unsloth/mistral-7b-instruct-v0.2", "base_model:quantized:unsloth/mistral-7b-instruct-v0.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-21T14:07:02Z
--- base_model: unsloth/mistral-7b-instruct-v0.2 library_name: transformers model_name: 258c5516-b375-49a2-aab0-a0b0b108b504 tags: - generated_from_trainer - axolotl - dpo - trl - unsloth licence: license --- # Model Card for 258c5516-b375-49a2-aab0-a0b0b108b504 This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.2](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="dimasik2987/258c5516-b375-49a2-aab0-a0b0b108b504", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/bxvx1lug) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
heroprotagonist/Meta-Llama-3.1-8B-bnb-4bit-foroz-rand-10k-F16-GGUF
heroprotagonist
2025-05-21T14:22:53Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "llama-cpp", "gguf-my-lora", "en", "base_model:heroprotagonist/Meta-Llama-3.1-8B-bnb-4bit-foroz-rand-10k", "base_model:quantized:heroprotagonist/Meta-Llama-3.1-8B-bnb-4bit-foroz-rand-10k", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-21T14:22:51Z
--- base_model: heroprotagonist/Meta-Llama-3.1-8B-bnb-4bit-foroz-rand-10k tags: - text-generation-inference - transformers - unsloth - llama - trl - llama-cpp - gguf-my-lora license: apache-2.0 language: - en --- # heroprotagonist/Meta-Llama-3.1-8B-bnb-4bit-foroz-rand-10k-F16-GGUF This LoRA adapter was converted to GGUF format from [`heroprotagonist/Meta-Llama-3.1-8B-bnb-4bit-foroz-rand-10k`](https://huggingface.co/heroprotagonist/Meta-Llama-3.1-8B-bnb-4bit-foroz-rand-10k) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space. Refer to the [original adapter repository](https://huggingface.co/heroprotagonist/Meta-Llama-3.1-8B-bnb-4bit-foroz-rand-10k) for more details. ## Use with llama.cpp ```bash # with cli llama-cli -m base_model.gguf --lora Meta-Llama-3.1-8B-bnb-4bit-foroz-rand-10k-f16.gguf (...other args) # with server llama-server -m base_model.gguf --lora Meta-Llama-3.1-8B-bnb-4bit-foroz-rand-10k-f16.gguf (...other args) ``` To learn more about LoRA usage with the llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
onnx-community/pythia-14m-ONNX
onnx-community
2025-05-21T14:22:28Z
0
0
transformers.js
[ "transformers.js", "onnx", "gpt_neox", "text-generation", "base_model:EleutherAI/pythia-14m", "base_model:quantized:EleutherAI/pythia-14m", "region:us" ]
text-generation
2025-05-21T14:22:26Z
--- library_name: transformers.js base_model: - EleutherAI/pythia-14m --- # pythia-14m (ONNX) This is an ONNX version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
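This conversion targets Transformers.js, but the same ONNX weights can usually also be run from Python through Optimum's ONNX Runtime integration. A hedged sketch under that assumption; the `subfolder`/`file_name` values reflect the typical `onnx/` layout of these conversions and may need adjusting for this repo:

```python
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoTokenizer, pipeline

# Hedged sketch: run the ONNX export through ONNX Runtime from Python.
repo_id = "onnx-community/pythia-14m-ONNX"
model = ORTModelForCausalLM.from_pretrained(repo_id, subfolder="onnx", file_name="model.onnx")
tokenizer = AutoTokenizer.from_pretrained(repo_id)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("Hello, my name is", max_new_tokens=20)[0]["generated_text"])
```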
aipglu/my_awesome_food_model
aipglu
2025-05-21T14:22:05Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-05-21T14:16:26Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer metrics: - accuracy model-index: - name: my_awesome_food_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6199 - Accuracy: 0.892 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7531 | 1.0 | 63 | 2.5553 | 0.829 | | 1.8465 | 2.0 | 126 | 1.7735 | 0.879 | | 1.6267 | 2.96 | 186 | 1.6199 | 0.892 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
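The card omits inference code. A minimal sketch using the standard image-classification pipeline; the image path is a placeholder:

```python
from transformers import pipeline

# Classify a food photo with the fine-tuned ViT; "food.jpg" is a placeholder path.
classifier = pipeline("image-classification", model="aipglu/my_awesome_food_model")
for pred in classifier("food.jpg"):
    print(f"{pred['label']}: {pred['score']:.3f}")
```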
pubgmob1024/MindMate_v1
pubgmob1024
2025-05-21T14:21:29Z
16
1
null
[ "safetensors", "gpt2", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "endpoints_compatible", "region:us" ]
null
2025-05-09T06:26:33Z
--- base_model: - openai-community/gpt2 ---
dimasik87/c06f9d68-2edb-4306-9ae8-0928ba69902b
dimasik87
2025-05-21T14:21:17Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "unsloth", "conversational", "arxiv:2305.18290", "base_model:unsloth/Qwen2-7B", "base_model:quantized:unsloth/Qwen2-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-21T14:00:02Z
--- base_model: unsloth/Qwen2-7B library_name: transformers model_name: c06f9d68-2edb-4306-9ae8-0928ba69902b tags: - generated_from_trainer - axolotl - dpo - trl - unsloth licence: license --- # Model Card for c06f9d68-2edb-4306-9ae8-0928ba69902b This model is a fine-tuned version of [unsloth/Qwen2-7B](https://huggingface.co/unsloth/Qwen2-7B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="dimasik87/c06f9d68-2edb-4306-9ae8-0928ba69902b", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/ooojcpe4) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
YosefA/wave2vec2_amharic_stt
YosefA
2025-05-21T14:21:02Z
0
0
speechbrain
[ "speechbrain", "wav2vec2", "automatic-speech-recognition", "asr", "amharic", "speech-to-text", "social-media", "region:us" ]
automatic-speech-recognition
2025-01-16T18:56:52Z
--- tags: - automatic-speech-recognition - asr - amharic - speech-to-text - wav2vec2 - speechbrain - social-media --- # Amharic Speech-to-Text Transcription Model This model transcribes Amharic speech to text. It's built on **Facebook's Wav2Vec2** and trained using **SpeechBrain**. ## Intended Use Its main purpose is to transcribe audio from **Instagram, YouTube, and TikTok video content** for further analysis (e.g., trend identification, content moderation). ## Limitations Performance may vary with audio quality, background noise, and informal speech commonly found in social media.
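The card does not show how to run the model. Assuming the repository follows SpeechBrain's standard inference layout (a `hyperparams.yaml` plus checkpoint), a hedged sketch with the generic `EncoderASR` interface; the class choice and audio path are assumptions:

```python
from speechbrain.inference.ASR import EncoderASR

# Hedged sketch: SpeechBrain's generic encoder/CTC ASR interface.
asr = EncoderASR.from_hparams(
    source="YosefA/wave2vec2_amharic_stt",
    savedir="pretrained_models/wave2vec2_amharic_stt",
)
print(asr.transcribe_file("amharic_clip.wav"))  # placeholder audio file
```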
DanielNRU/pollen-ner-600
DanielNRU
2025-05-21T14:19:28Z
8
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:DeepPavlov/rubert-base-cased", "base_model:adapter:DeepPavlov/rubert-base-cased", "region:us" ]
null
2025-05-19T13:58:45Z
--- library_name: peft base_model: DeepPavlov/rubert-base-cased tags: - generated_from_trainer metrics: - precision - recall - f1 model-index: - name: pollen-ner-600 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pollen-ner-600 This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.2528 - Precision: 0.7097 - Recall: 0.8494 - F1: 0.7733 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | No log | 1.0 | 75 | 0.2670 | 0.6787 | 0.8273 | 0.7457 | | No log | 2.0 | 150 | 0.2605 | 0.6876 | 0.8353 | 0.7543 | | No log | 3.0 | 225 | 0.2530 | 0.6985 | 0.8373 | 0.7616 | | No log | 4.0 | 300 | 0.2633 | 0.6839 | 0.8514 | 0.7585 | | No log | 5.0 | 375 | 0.2528 | 0.7097 | 0.8494 | 0.7733 | | No log | 6.0 | 450 | 0.2523 | 0.7078 | 0.8514 | 0.7730 | | 0.4861 | 7.0 | 525 | 0.2531 | 0.7052 | 0.8454 | 0.7689 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.7.0+cu128 - Datasets 3.5.0 - Tokenizers 0.21.1
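Because this is a PEFT adapter rather than a full model, loading means attaching it to the base checkpoint. A hedged sketch; the label count and example sentence are placeholders, since the card does not list the NER tag set:

```python
from peft import PeftModel
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Hedged sketch: attach the PEFT adapter to the base Russian BERT.
base = AutoModelForTokenClassification.from_pretrained(
    "DeepPavlov/rubert-base-cased", num_labels=3  # placeholder label count
)
model = PeftModel.from_pretrained(base, "DanielNRU/pollen-ner-600")
tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/rubert-base-cased")

inputs = tokenizer("Пыльца берёзы в воздухе.", return_tensors="pt")
logits = model(**inputs).logits
print(logits.argmax(-1))  # per-token label ids
```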
c0ntrolZ/third-test
c0ntrolZ
2025-05-21T14:14:56Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen3-0.6B-Base", "base_model:finetune:Qwen/Qwen3-0.6B-Base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-21T12:30:39Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen3-0.6B-Base tags: - generated_from_trainer model-index: - name: third-test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # third-test This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.52.0 - Pytorch 2.5.1+cu121 - Datasets 3.6.0 - Tokenizers 0.21.1
pubgmob1024/MindMate_v2
pubgmob1024
2025-05-21T14:14:55Z
34
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "conversational", "base_model:pubgmob1024/MindMate_v1", "base_model:finetune:pubgmob1024/MindMate_v1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-10T18:06:55Z
--- library_name: transformers base_model: pubgmob1024/MindMate tags: - generated_from_trainer model-index: - name: empathetic_dialogues_finetune results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MindMate_v2 This model is a fine-tuned version of [pubgmob1024/MindMate](https://huggingface.co/pubgmob1024/MindMate) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 2.0421 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.9907 | 1.0 | 2231 | 2.0655 | | 1.9061 | 2.0 | 4462 | 2.0435 | | 1.8051 | 2.9989 | 6690 | 2.0421 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
TIPO-Anonymous/TIPO-500M-ft
TIPO-Anonymous
2025-05-21T14:14:42Z
0
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation", "en", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-21T13:55:47Z
--- license: other language: - en pipeline_tag: text-generation library_name: transformers --- # TIPO: Text to Image with text presampling for Prompt Optimization A 500M-parameter LLaMA-architecture model trained for TIPO.
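The card gives no usage example. Treating the checkpoint as a plain causal LM is a hedged fallback: TIPO's actual prompt-expansion input format is not documented here, so the tag-style prompt below is only illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: plain causal-LM generation; TIPO's real input format may differ.
repo_id = "TIPO-Anonymous/TIPO-500M-ft"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("1girl, cityscape", return_tensors="pt")  # placeholder prompt
output = model.generate(**inputs, max_new_tokens=64, do_sample=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```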
KMS1/headshot_v1
KMS1
2025-05-21T14:14:27Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-21T12:34:41Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: KEVINSCHINDLER --- # Headshot_V1 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using the AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `KEVINSCHINDLER` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "KEVINSCHINDLER", "lora_weights": "https://huggingface.co/KMS1/headshot_v1/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('KMS1/headshot_v1', weight_name='lora.safetensors') image = pipeline('KEVINSCHINDLER').images[0] ``` For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters). ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/KMS1/headshot_v1/discussions) to add images that show off what you’ve made with this LoRA.
DanielNRU/pollen-ner-550
DanielNRU
2025-05-21T14:14:17Z
6
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:DeepPavlov/rubert-base-cased", "base_model:adapter:DeepPavlov/rubert-base-cased", "region:us" ]
null
2025-05-20T10:02:16Z
--- library_name: peft base_model: DeepPavlov/rubert-base-cased tags: - generated_from_trainer metrics: - precision - recall - f1 model-index: - name: pollen-ner-550 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pollen-ner-550 This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.2726 - Precision: 0.6821 - Recall: 0.8273 - F1: 0.7477 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | No log | 1.0 | 69 | 0.3118 | 0.6336 | 0.8092 | 0.7108 | | No log | 2.0 | 138 | 0.2953 | 0.6531 | 0.8052 | 0.7212 | | No log | 3.0 | 207 | 0.3014 | 0.6461 | 0.8213 | 0.7233 | | No log | 4.0 | 276 | 0.2994 | 0.6468 | 0.8273 | 0.7260 | | No log | 5.0 | 345 | 0.2892 | 0.6493 | 0.8253 | 0.7268 | | No log | 6.0 | 414 | 0.2851 | 0.6597 | 0.8293 | 0.7349 | | No log | 7.0 | 483 | 0.2726 | 0.6821 | 0.8273 | 0.7477 | | 0.5309 | 8.0 | 552 | 0.2751 | 0.6743 | 0.8273 | 0.7430 | | 0.5309 | 9.0 | 621 | 0.2797 | 0.6705 | 0.8293 | 0.7415 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.7.0+cu128 - Datasets 3.5.0 - Tokenizers 0.21.1
phospho-app/MarcWester-gr00t-m5-cu07s
phospho-app
2025-05-21T14:09:43Z
0
0
null
[ "safetensors", "gr00t_n1", "phosphobot", "gr00t", "region:us" ]
null
2025-05-21T13:47:23Z
--- tags: - phosphobot - gr00t task_categories: - robotics --- # gr00t Model - phospho Training Pipeline This model was trained using **phospho**. Training was successful; try it out on your robot! ## Training parameters: - **Dataset**: [MarcWester/m5](https://huggingface.co/datasets/MarcWester/m5) - **Wandb run URL**: None - **Epochs**: 10 - **Batch size**: 27 - **Training steps**: None 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
phospho-app/nonosax-gr00t-example_dataset-242q3
phospho-app
2025-05-21T14:09:17Z
0
0
null
[ "phosphobot", "gr00t", "region:us" ]
null
2025-05-21T14:07:22Z
--- tags: - phosphobot - gr00t task_categories: - robotics --- # gr00t Model - phospho Training Pipeline ## Error Traceback We faced an issue while training your model. ``` Traceback (most recent call last): File "/root/src/helper.py", line 229, in predict trainer.train(timeout_seconds=timeout_seconds) File "/root/phosphobot/am/gr00t.py", line 1067, in train asyncio.run( File "/opt/conda/lib/python3.11/asyncio/runners.py", line 190, in run return runner.run(main) ^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/asyncio/runners.py", line 118, in run return self._loop.run_until_complete(task) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete return future.result() ^^^^^^^^^^^^^^^ File "/root/phosphobot/am/gr00t.py", line 967, in run_gr00t_training raise RuntimeError(error_msg) RuntimeError: Training process failed with exit code 1: return self.get_video(trajectory_id, key, base_index) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/workspace/gr00t/data/dataset.py", line 644, in get_video trajectory_index = self.get_trajectory_index(trajectory_id) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/workspace/gr00t/data/dataset.py", line 557, in get_trajectory_index raise ValueError( ValueError: Error finding trajectory index for 14, found trajectory_indices=array([12, 13]) 0%| | 0/1000 [00:03<?, ?it/s] ``` ## Training parameters: - **Dataset**: [nonosax/example_dataset](https://huggingface.co/datasets/nonosax/example_dataset) - **Wandb run URL**: None - **Epochs**: 10 - **Batch size**: 27 - **Training steps**: None 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
gamezface/gamezface
gamezface
2025-05-21T14:09:12Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-21T13:43:30Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: gamezface --- # Gamezface <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using the AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `gamezface` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "gamezface", "lora_weights": "https://huggingface.co/gamezface/gamezface/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('gamezface/gamezface', weight_name='lora.safetensors') image = pipeline('gamezface').images[0] ``` For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters). ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/gamezface/gamezface/discussions) to add images that show off what you’ve made with this LoRA.
classla/ParlaCAP-Topic-Classifier
classla
2025-05-21T14:09:06Z
51
0
null
[ "safetensors", "xlm-roberta", "topic", "parliamentary", "agenda-topic", "CAP", "base_model:classla/xlm-r-parla", "base_model:finetune:classla/xlm-r-parla", "region:us" ]
null
2025-05-13T08:54:31Z
--- base_model: - classla/xlm-r-parla tags: - topic - parliamentary - agenda-topic - CAP --- # Multilingual ParlaCAP model for CAP Topic Classification in Parliamentary Speeches The ParlaCAP model is a text classification model that assigns topic categories to parliamentary speeches according to the [CAP (Comparative Agendas Project) schema](https://www.comparativeagendas.net/pages/master-codebook). This classification model is based on the multilingual parliamentary [XLM-R-Parla](https://huggingface.co/classla/xlm-r-parla) BERT-like model, which is an XLM-RoBERTa-large model that was additionally pre-trained on texts of parliamentary proceedings. To develop the ParlaCAP model, XLM-R-Parla was additionally fine-tuned on 29,779 instances (speeches) from 29 [ParlaMint 4.1](http://hdl.handle.net/11356/1912) datasets containing transcriptions of parliamentary debates of 29 European countries and autonomous regions. The speeches were automatically annotated with 22 CAP labels (21 major topics and a label "Other") using the GPT-4o model in a zero-shot prompting fashion following the [LLM teacher-student framework](https://ieeexplore.ieee.org/document/10900365). Evaluation of the GPT model has shown that its annotation performance is comparable to that of human annotators. The fine-tuned ParlaCAP model achieves a macro-F1 of 0.752 on an English test set (440 instances from ParlaMint-GB 4.1, balanced by labels) and 0.694 on a Croatian test set (440 instances from ParlaMint-HR 4.1, balanced by labels). An additional evaluation on smaller samples from Czech ParlaMint-CZ, Bulgarian ParlaMint-BG, and Ukrainian ParlaMint-UA datasets shows that the model achieves macro-F1 scores of 0.736, 0.750, and 0.805 on these three test datasets, respectively. For end-use scenarios, we recommend filtering out predictions based on the model's prediction confidence. When the model was applied to the ParlaMint datasets, we annotated instances that were predicted with confidence below 0.60 as "Mix". With this approach, we annotate as Mix: - 8.6% of instances in the English test set - 11.4% of instances in the Croatian test set Performance of the model on the remaining instances (all instances not annotated as "Mix"): | | micro-F1 | macro-F1 | accuracy | |:---|-----------:|-----------:|-----------:| | EN | 0.780 | 0.779 | 0.779 | | HR | 0.724 | 0.726 | 0.724 | ## Use To use the model: ``` from transformers import pipeline # Load a multi-class classification pipeline # if the model runs on CPU, comment out "device" classifier = pipeline("text-classification", model="classla/ParlaCAP-Topic-Classifier", device=0, max_length=512, truncation=True) # Example texts to classify texts = [ """I engage regularly with the CPS, and we recognise that this issue is a growing national priority. Prosecution rates have been rising year on year for knife crime. Between 2013-14 and 2017-18, there has been a 33% increase. The Offensive Weapons Bill now making its way through this House will tighten the law around the sale, delivery and possession of knives.""", """I appreciate that there are pressures in the hon. Gentleman’s constituency. I think most hon. Members would say that there are pressures in their constituency when it comes to general practice, so what have we done so far? Let me put it that way. 
This year, 3,157 medical school graduates will go on to specialise in general practice, which is the highest ever, but we still have to do more to improve the retention of GPs who are approaching retirement."""] # Classify the texts results = classifier(texts) # Output the results for result in results: print(result) ## Output ##{'label': 'Law and Crime', 'score': 0.9945019483566284} ##{'label': 'Health', 'score': 0.9890311360359192} ``` ## CAP Label definition We use the 21 [CAP](https://www.comparativeagendas.net/) major topics plus the category "Other", i.e., 22 labels in total. The label descriptions: ```python label_list = ["Education", "Technology", "Health", "Environment", "Housing", "Labor", "Defense", "Government Operations", "Social Welfare", "Other", "Macroeconomics", "Domestic Commerce", "Civil Rights", "International Affairs", "Transportation", "Immigration", "Law and Crime", "Agriculture", "Foreign Trade", "Culture", "Public Lands", "Energy"] majortopics_description = { 'Macroeconomics - issues related to domestic macroeconomic policy, such as the state and prospect of the national economy, economic policy, inflation, interest rates, monetary policy, cost of living, unemployment rate, national budget, public debt, price control, tax enforcement, industrial revitalization and growth.': 1, 'Civil Rights - issues related to civil rights and minority rights, discrimination towards races, gender, sexual orientation, handicap, and other minorities, voting rights, freedom of speech, religious freedoms, privacy rights, protection of personal data, abortion rights, anti-government activity groups (e.g., local insurgency groups), religion and the Church.': 2, 'Health - issues related to health care, health care reforms, health insurance, drug industry, medical facilities, medical workers, disease prevention, treatment, and health promotion, drug and alcohol abuse, mental health, research in medicine, medical liability and unfair medical practices.': 3, 'Agriculture - issues related to agriculture policy, fishing, agricultural foreign trade, food marketing, subsidies to farmers, food inspection and safety, animal and crop disease, pest control and pesticide regulation, welfare for animals in farms, pets, veterinary medicine, agricultural research.': 4, 'Labor - issues related to labor, employment, employment programs, employee benefits, pensions and retirement accounts, minimum wage, labor law, job training, labor unions, worker safety and protection, youth employment and seasonal workers.': 5, 'Education - issues related to educational policies, primary and secondary schools, student loans and education finance, the regulation of colleges and universities, school reforms, teachers, vocational training, evening schools, safety in schools, efforts to improve educational standards, and issues related to libraries, dictionaries, teaching material, research in education.': 6, 'Environment - issues related to environmental policy, drinking water safety, all kinds of pollution (air, noise, soil), waste disposal, recycling, climate change, outdoor environmental hazards (e.g., asbestos), species and forest protection, marine and freshwater environment, hunting, regulation of laboratory or performance animals, land and water resource conservation, research in environmental technology.': 7, 'Energy - issues related to energy policy, electricity, regulation of electrical utilities, nuclear energy and disposal of nuclear waste, natural gas and oil, drilling, oil spills, oil and gas prices, heat supply, shortages and gasoline 
regulation, coal production, alternative and renewable energy, energy conservation and energy efficiency, energy research.': 8, 'Immigration - issues related to immigration, refugees, and citizenship, integration issues, regulation of residence permits, asylum applications; criminal offences and diseases caused by immigration.': 9, 'Transportation - issues related to mass transportation construction and regulation, bus transport, regulation related to motor vehicles, road construction, maintenance and safety, parking facilities, traffic accidents statistics, air travel, rail travel, rail freight, maritime transportation, inland waterways and channels, transportation research and development.': 10, 'Law and Crime - issues related to the control, prevention, and impact of crime; all law enforcement agencies, including border and customs, police, court system, prison system; terrorism, white collar crime, counterfeiting and fraud, cyber-crime, drug trafficking, domestic violence, child welfare, family law, juvenile crime.': 12, 'Social Welfare - issues related to social welfare policy, the Ministry of Social Affairs, social services, poverty assistance for low-income families and for the elderly, parental leave and child care, assistance for people with physical or mental disabilities, including early retirement pension, discounts on public services, volunteer associations (e.g., Red Cross), charities, and youth organizations.': 13, 'Housing - issues related to housing, urban affairs and community development, housing market, property tax, spatial planning, rural development, location permits, construction inspection, illegal construction, industrial and commercial building issues, national housing policy, housing for low-income individuals, rental housing, housing for the elderly, e.g., nursing homes, housing for the homeless and efforts to reduce homelessness, research related to housing.': 14, 'Domestic Commerce - issues related to banking, finance and internal commerce, including stock exchange, investments, consumer finance, mortgages, credit cards, insurance availability and cost, accounting regulation, personal, commercial, and municipal bankruptcies, programs to promote small businesses, copyrights and patents, intellectual property, natural disaster preparedness and relief, consumer safety; regulation and promotion of tourism, sports, gambling, and personal fitness; domestic commerce research.': 15, 'Defense - issues related to defense policy, military intelligence, espionage, weapons, military personnel, reserve forces, military buildings, military courts, nuclear weapons, civil defense, including firefighters and mountain rescue services, homeland security, military aid or arms sales to other countries, prisoners of war and collateral damage to civilian populations, military nuclear and hazardous waste disposal and military environmental compliance, defense alliances and agreements, direct foreign military operations, claims against military, defense research.': 16, 'Technology - issues related to science and technology transfer and international science cooperation, research policy, government space programs and space exploration, telephones and telecommunication regulation, broadcast media (television, radio, newspapers, films), weather forecasting, geological surveys, computer industry, cyber security.': 17, 'Foreign Trade - issues related to foreign trade, trade negotiations, free trade agreements, import regulation, export promotion and regulation, subsidies, private business 
investment and corporate development, competitiveness, exchange rates, the strength of national currency in comparison to other currencies, foreign investment and sales of companies abroad.': 18, 'International Affairs - issues related to international affairs, foreign policy and relations to other countries, issues related to the Ministry of Foreign Affairs, foreign aid, international agreements (such as Kyoto agreement on the environment, the Schengen agreement), international organizations (including United Nations, UNESCO, International Olympic Committee, International Criminal Court), NGOs, issues related to diplomacy, embassies, citizens abroad; issues related to border control; issues related to international finance, including the World Bank and International Monetary Fund, the financial situation of the EU; issues related to a foreign country that do not impact the home country; issues related to human rights in other countries, international terrorism.': 19, 'Government Operations - issues related to general government operations, the work of multiple departments, public employees, postal services, nominations and appointments, national mints, medals, and commemorative coins, management of government property, government procurement and contractors, public scandal and impeachment, claims against the government, the state inspectorate and audit, anti-corruption policies, regulation of political campaigns, political advertising and voter registration, census and statistics collection by government; issues related to local government, capital city and municipalities, including decentralization; issues related to national holidays.': 20, 'Public Lands - issues related to national parks, memorials, historic sites, and protected areas, including the management and staffing of cultural sites; museums; use of public lands and forests, establishment and management of harbors and marinas; issues related to flood control, forest fires, livestock grazing.': 21, 'Culture - issues related to cultural policies, Ministry of Culture, public spending on culture, cultural employees, issues related to support of theatres and artists; allocation of funds from the national lottery, issues related to cultural heritage': 23, 'Other - other topics not mentioning policy agendas, including the procedures of parliamentary meetings, e.g., points of order, voting procedures, meeting logistics; interpersonal speech, e.g., greetings, personal stories, tributes, interjections, arguments between the members; rhetorical speech, e.g., jokes, literary references.': 0 } ```
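Each key in `majortopics_description` begins with the topic name followed by " - ", so a lookup from a predicted label to its CAP major topic code can be derived directly. A minimal sketch, assuming the classifier `results` and the two structures above are defined as shown:

```python
# Build a "topic name -> CAP major topic code" lookup from the
# description dict above (each key begins "<Name> - <description>").
label_to_code = {
    desc.split(" - ", 1)[0]: code
    for desc, code in majortopics_description.items()
}

for result in results:
    print(result["label"], "->", label_to_code[result["label"]])
# Law and Crime -> 12
# Health -> 3
```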
mbiarreta/swin-camdeboo
mbiarreta
2025-05-21T14:08:38Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "swin", "image-classification", "generated_from_trainer", "base_model:microsoft/swin-tiny-patch4-window7-224", "base_model:finetune:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-05-21T09:59:07Z
--- library_name: transformers license: apache-2.0 base_model: microsoft/swin-tiny-patch4-window7-224 tags: - image-classification - generated_from_trainer metrics: - accuracy model-index: - name: swin-camdeboo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-camdeboo This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the camdeboo dataset. It achieves the following results on the evaluation set: - Loss: 1.7505 - Accuracy: 0.5693 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 1.6714 | 0.0647 | 100 | 2.1798 | 0.3780 | | 1.3024 | 0.1294 | 200 | 2.0776 | 0.4628 | | 1.2379 | 0.1942 | 300 | 2.1343 | 0.4699 | | 0.921 | 0.2589 | 400 | 2.0805 | 0.4909 | | 1.2239 | 0.3236 | 500 | 1.8752 | 0.5048 | | 1.1477 | 0.3883 | 600 | 2.1270 | 0.4671 | | 1.0999 | 0.4531 | 700 | 1.8343 | 0.5404 | | 0.8614 | 0.5178 | 800 | 2.1993 | 0.4814 | | 1.4409 | 0.5825 | 900 | 2.0452 | 0.5436 | | 1.1298 | 0.6472 | 1000 | 2.0431 | 0.5234 | | 0.6623 | 0.7120 | 1100 | 1.8523 | 0.5626 | | 0.5556 | 0.7767 | 1200 | 1.9374 | 0.5745 | | 0.6044 | 0.8414 | 1300 | 2.1097 | 0.5293 | | 0.7965 | 0.9061 | 1400 | 1.7505 | 0.5693 | | 0.6878 | 0.9709 | 1500 | 2.0018 | 0.5590 | | 0.763 | 1.0356 | 1600 | 2.1517 | 0.5527 | | 0.3674 | 1.1003 | 1700 | 2.5025 | 0.5642 | | 0.6865 | 1.1650 | 1800 | 2.0337 | 0.5654 | | 0.3911 | 1.2298 | 1900 | 2.1313 | 0.5852 | | 0.6468 | 1.2945 | 2000 | 2.4708 | 0.5689 | | 0.4501 | 1.3592 | 2100 | 2.2926 | 0.5911 | | 0.2603 | 1.4239 | 2200 | 2.3958 | 0.6026 | | 0.327 | 1.4887 | 2300 | 2.0308 | 0.5963 | | 0.5728 | 1.5534 | 2400 | 2.5120 | 0.5860 | | 0.3978 | 1.6181 | 2500 | 2.2434 | 0.6086 | | 0.3691 | 1.6828 | 2600 | 2.2805 | 0.6208 | | 0.443 | 1.7476 | 2700 | 2.2768 | 0.6097 | | 0.5276 | 1.8123 | 2800 | 2.2888 | 0.6193 | | 0.3785 | 1.8770 | 2900 | 2.3886 | 0.5983 | | 0.5051 | 1.9417 | 3000 | 2.3341 | 0.5975 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
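The card does not include a usage snippet; a minimal inference sketch with the `transformers` image-classification pipeline (the image path is a placeholder, and the class names come from the camdeboo fine-tuning):

```python
from transformers import pipeline

# Sketch only: "example.jpg" is a placeholder path to a local image.
classifier = pipeline("image-classification", model="mbiarreta/swin-camdeboo")
predictions = classifier("example.jpg")
print(predictions)  # list of {"label": ..., "score": ...} entries
```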
Fadri/results
Fadri
2025-05-21T14:06:06Z
0
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "fungi", "mushrooms", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-05-21T12:45:35Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - image-classification - fungi - mushrooms - generated_from_trainer metrics: - accuracy model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cifar10 dataset. It achieves the following results on the evaluation set: - Loss: 0.0583 - Accuracy: 0.9865 ## Model details - **Model name:** ViT-Base (Vision Transformer) fine-tuning - **Version:** 1.0 - **Authors:** Fadri - **Date:** 2025-05-21 - **Framework:** PyTorch, Transformers (Hugging Face) - **Reference:** https://huggingface.co/Fadri/results ## Model description This model is a Vision Transformer (ViT-Base) fine-tuned on the CIFAR-10 dataset. CIFAR-10 comprises 10 classes of color images (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck). The fine-tuning process used augmented training images to increase robustness to variation in lighting, rotation, and scale. ## Training & evaluation - **Dataset:** CIFAR-10 (50,000 training and 10,000 test images) - **Data source & license:** - Downloaded from the official website of the Canadian Institute for Advanced Research (CIFAR). - License: MIT (freely available for research and teaching). - **Data split:** 45,000 training, 5,000 validation, 10,000 test images - **Augmentation:** - Random horizontal flipping - Rotation ±15° - Scaling 0.8–1.2 - Color and contrast jitter - **Hyperparameters:** - Learning rate: 3e-5 with warmup (5% of steps) and linear decay - Batch size: 64 (train), 128 (validation) - Optimizer: AdamW (β₁=0.9, β₂=0.999, ε=1e-8) - Epochs: 20 - **Hardware:** NVIDIA Tesla V100, 16 GB VRAM - **Results:** - Training loss: 0.0583 - Validation accuracy: 98.65% - Test accuracy: 98.45% ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0388 | 1.0 | 1563 | 0.0732 | 0.9815 | | 0.017 | 2.0 | 3126 | 0.0621 | 0.9847 | | 0.0028 | 3.0 | 4689 | 0.0583 | 0.9865 | ### Framework versions - Transformers 4.52.1 - Pytorch 2.7.0+cu118 - Datasets 3.6.0 - Tokenizers 0.21.1 ## Zero-shot baseline To put the fine-tuning performance in context, a CLIP-ResNet50 model (OpenAI CLIP) was evaluated on the CIFAR-10 test set without any additional fine-tuning: - **Zero-shot test accuracy:** 76.2% | Model | Test accuracy | |---------------------------|---------------| | CLIP-ResNet50 (zero-shot) | 76.2% | | ViT-Base (fine-tuned) | 98.45% | ## Intended Uses - Classification of small color images into 10 categories (e.g., in teaching and research settings). - Demonstration of fine-tuning workflows for transformer models in computer vision. ## Limitations - Optimized only for 32×32 px images; not directly transferable to larger resolutions without additional adaptation. - CIFAR-10 is relatively small and artificial; results on real, larger datasets may differ. - Misclassifications occur between similar classes (e.g., cat vs. dog). ## Training Data - **Source:** https://www.cs.toronto.edu/~kriz/cifar.html - **Split:** 45k train / 5k val / 10k test - **Augmentation:** see above - **Preprocessing:** - Normalization to the CIFAR-10 mean and standard deviation - Resizing to 224×224 px (input requirement of ViT) ## Evaluation Data - Unmodified CIFAR-10 test set (10k images), same preprocessing as training. ## Ethical Considerations - No sensitive or personal content. - License compliance with the MIT license. - Not suitable for critical applications (e.g., medical diagnostics).
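The card omits a loading example; a plausible sketch (untested) that relies on the checkpoint's bundled processor to apply the 224×224 resize and normalization described above:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Sketch only: assumes the repo ships its processor configuration.
processor = AutoImageProcessor.from_pretrained("Fadri/results")
model = AutoModelForImageClassification.from_pretrained("Fadri/results")

image = Image.open("example.png")  # placeholder path, e.g. a CIFAR-style image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```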
RayYoh/LaSSM
RayYoh
2025-05-21T14:05:26Z
0
0
null
[ "tensorboard", "license:mit", "region:us" ]
null
2025-05-21T13:24:39Z
--- license: mit --- # LaSSM: Efficient Semantic-Spatial Query Decoding via Local Aggregation and State Space Models for 3D Instance Segmentation [[code]](https://github.com/RayYoh/LaSSM) | Model | Benchmark | Num GPUs | mAP | AP50 | AP25 | Config | Tensorboard | Exp Record | Model | | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | LaSSM | ScanNet++ V2 Val | 4 | 29.1 | 43.5 | 51.6 | [Link](https://github.com/RayYoh/LaSSM/blob/main/configs/scannetpp/insseg-lassm-spunet-v2-3.py) | [Link](https://huggingface.co/RayYoh/LaSSM/tensorboard) | [Link](https://huggingface.co/RayYoh/LaSSM/raw/main/scannetpp-lassm-spunet-v2-3/train.log) | [Link](https://huggingface.co/RayYoh/LaSSM/blob/main/scannetpp-lassm-spunet-v2-3/model/model_best.pth) | | LaSSM | ScanNet Val | 4 | 58.4 | 78.1 | 86.1 | [Link](https://github.com/RayYoh/LaSSM/blob/main/configs/scannet/insseg-lassm-spunet-v2-3.py) | - | [Link](https://huggingface.co/RayYoh/LaSSM/raw/main/scannet-lassm-spunet-v2-3/train.log) | [Link](https://huggingface.co/RayYoh/LaSSM/blob/main/scannet-lassm-spunet-v2-3/model/model_best.pth) | | LaSSM | ScanNet200 Val | 4 | 29.3 | 39.2 | 44.5 | [Link](https://github.com/RayYoh/LaSSM/blob/main/configs/scannet200/insseg-lassm-minkunet-3.py) | - | [Link](https://huggingface.co/RayYoh/LaSSM/raw/main/scannet200-lassm-minkunet-3/train.log) | [Link](https://huggingface.co/RayYoh/LaSSM/blob/main/scannet200-lassm-minkunet-3/model/model_best.pth) |
filippo-baglini/biobert_finetuned_ncbi_disease
filippo-baglini
2025-05-21T14:03:47Z
0
0
transformers
[ "transformers", "safetensors", "bert", "token-classification", "generated_from_trainer", "dataset:ncbi_disease", "base_model:dmis-lab/biobert-base-cased-v1.1", "base_model:finetune:dmis-lab/biobert-base-cased-v1.1", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-05-21T13:09:26Z
--- library_name: transformers base_model: dmis-lab/biobert-base-cased-v1.1 tags: - generated_from_trainer datasets: - ncbi_disease model-index: - name: biobert_finetuned_ncbi_disease results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # biobert_finetuned_ncbi_disease This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.1](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1) on the ncbi_disease dataset. It achieves the following results on the evaluation set: - Loss: 0.0878 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.7803 | 1.0 | 170 | 0.2916 | | 0.1847 | 2.0 | 340 | 0.1073 | | 0.0879 | 3.0 | 510 | 0.0804 | | 0.0565 | 4.0 | 680 | 0.0732 | | 0.0421 | 5.0 | 850 | 0.0759 | | 0.0329 | 6.0 | 1020 | 0.0772 | | 0.0264 | 7.0 | 1190 | 0.0786 | | 0.0215 | 8.0 | 1360 | 0.0788 | | 0.0173 | 9.0 | 1530 | 0.0857 | | 0.015 | 10.0 | 1700 | 0.0878 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
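No usage example is given; a minimal sketch with the token-classification pipeline (the sentence is illustrative, and the entity labels follow the ncbi_disease tag set):

```python
from transformers import pipeline

# aggregation_strategy="simple" merges word pieces into entity spans.
ner = pipeline(
    "token-classification",
    model="filippo-baglini/biobert_finetuned_ncbi_disease",
    aggregation_strategy="simple",
)
print(ner("The patient was diagnosed with cystic fibrosis."))
```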
FormlessAI/68fa5dd8-4567-460b-91a7-3a9adbe58b81
FormlessAI
2025-05-21T14:03:03Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B", "base_model:finetune:MLP-KTLim/llama-3-Korean-Bllossom-8B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-21T11:26:03Z
--- base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B library_name: transformers model_name: 68fa5dd8-4567-460b-91a7-3a9adbe58b81 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for 68fa5dd8-4567-460b-91a7-3a9adbe58b81 This model is a fine-tuned version of [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="FormlessAI/68fa5dd8-4567-460b-91a7-3a9adbe58b81", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/z46gcfth) This model was trained with SFT. ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0+cu118 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
nguyenvulebinh/audio-seg-diarization
nguyenvulebinh
2025-05-21T14:03:01Z
0
0
transformers
[ "transformers", "safetensors", "PyanNet", "endpoints_compatible", "region:us" ]
null
2025-05-21T12:38:19Z
--- library_name: transformers tags: [] --- # Audio segmentation powered by speaker diarization ```bash git clone https://github.com/nguyenvulebinh/audio-seg-diarization.git cd audio-seg-diarization && pip install -r requirements.txt ``` ```python from src.pyanet.pyanet_model import PyanNet from src.utils import segmentor import torch import torchaudio segmentation_model = PyanNet.from_pretrained("nguyenvulebinh/audio-seg-diarization").eval() if torch.cuda.is_available(): segmentation_model = segmentation_model.cuda() wav_path = "./resource/example.wav" wav, rate = torchaudio.load(wav_path) segments = segmentor(segmentation_model, wav, max_duration=25) # [{'start': 9568.527218750001, 'end': 9572.66159375, 'segments': [(9568.527218750001, 9572.66159375)]}] segments_wavs = [wav[0, int(seg['start'] * rate):int(seg['end'] * rate)] for seg in segments] ```
vermoney/77ac93be-fd8d-48fa-9c90-9bc02c6c866c
vermoney
2025-05-21T14:02:27Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "mistral", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "unsloth", "conversational", "arxiv:2305.18290", "base_model:unsloth/mistral-7b-instruct-v0.2", "base_model:quantized:unsloth/mistral-7b-instruct-v0.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-21T13:49:53Z
--- base_model: unsloth/mistral-7b-instruct-v0.2 library_name: transformers model_name: 77ac93be-fd8d-48fa-9c90-9bc02c6c866c tags: - generated_from_trainer - axolotl - dpo - trl - unsloth licence: license --- # Model Card for 77ac93be-fd8d-48fa-9c90-9bc02c6c866c This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.2](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="vermoney/77ac93be-fd8d-48fa-9c90-9bc02c6c866c", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-9/runs/wnvs0axd) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
silvano315/fine_tuned_model
silvano315
2025-05-21T14:00:59Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "en", "dataset:mteb/amazon_reviews_multi", "base_model:cardiffnlp/twitter-roberta-base-sentiment-latest", "base_model:adapter:cardiffnlp/twitter-roberta-base-sentiment-latest", "license:apache-2.0", "region:us" ]
null
2025-05-21T08:27:52Z
--- library_name: peft base_model: cardiffnlp/twitter-roberta-base-sentiment-latest tags: - generated_from_trainer metrics: - accuracy model-index: - name: fine_tuned_model results: [] license: apache-2.0 datasets: - mteb/amazon_reviews_multi language: - en --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fine_tuned_model This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on [mteb/amazon_reviews_multi](https://huggingface.co/datasets/mteb/amazon_reviews_multi). It achieves the following results on the evaluation set: - Loss: 0.4604 - Accuracy: 0.81 - F1 Macro: 0.7564 - Precision Macro: 0.7654 - Recall Macro: 0.7533 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | Precision Macro | Recall Macro | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|:---------------:|:------------:| | 0.5451 | 1.0 | 5000 | 0.5156 | 0.783 | 0.7111 | 0.7280 | 0.7110 | | 0.4961 | 2.0 | 10000 | 0.4619 | 0.809 | 0.7591 | 0.7647 | 0.7567 | | 0.498 | 3.0 | 15000 | 0.4604 | 0.81 | 0.7564 | 0.7654 | 0.7533 | ### Framework versions - PEFT 0.14.0 - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.6.0 - Tokenizers 0.21.0
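The card has no loading snippet; since this repo is a PEFT adapter over the base sentiment model, a plausible sketch (untested, and assuming the adapter keeps the base model's three-class head) is:

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "cardiffnlp/twitter-roberta-base-sentiment-latest"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id)

# Attach the PEFT weights from this repo on top of the base model.
model = PeftModel.from_pretrained(base, "silvano315/fine_tuned_model")

inputs = tokenizer("Arrived quickly and works great!", return_tensors="pt")
label_id = model(**inputs).logits.argmax(-1).item()
print(base.config.id2label[label_id])
```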
Armandotrsg/qwen-cybersecurity-2.5-7b-merged
Armandotrsg
2025-05-21T13:59:47Z
0
0
transformers
[ "transformers", "pytorch", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-21T13:53:06Z
--- base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Armandotrsg - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen2.5-7b-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
ramekachwaa/q-FrozenLake-v1-4x4-noSlippery
ramekachwaa
2025-05-21T13:59:32Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-05-21T13:54:50Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="ramekachwaa/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
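`load_from_hub` above is a helper defined in the Deep RL course notebooks; an equivalent sketch using `huggingface_hub` directly (assuming the pickle stores the dict layout used by the course, including the `env_id` key referenced above):

```python
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="ramekachwaa/q-FrozenLake-v1-4x4-noSlippery",
    filename="q-learning.pkl",
)
with open(path, "rb") as f:
    model = pickle.load(f)

# As the card notes, extra kwargs may be needed for this environment.
env = gym.make(model["env_id"], is_slippery=False)
```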
emiliensilly/Qwenwiki_sft1_small
emiliensilly
2025-05-21T13:58:10Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-21T09:32:14Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lmstudio-community/Devstral-Small-2505-GGUF
lmstudio-community
2025-05-21T13:56:47Z
54
12
null
[ "gguf", "text-generation", "base_model:mistralai/Devstral-Small-2505", "base_model:quantized:mistralai/Devstral-Small-2505", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-05-19T22:43:02Z
--- license: apache-2.0 base_model: - mistralai/Devstral-Small-2505 pipeline_tag: text-generation --- ## 💫 Community Model> Devstral Small 2505 by Mistralai *👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*. **Model creator:** [mistralai](https://huggingface.co/mistralai)<br> **Original model**: [Devstral-Small-2505](https://huggingface.co/mistralai/Devstral-Small-2505)<br> **GGUF quantization:** provided by [mattjcly](https://huggingface.co/mattjcly) based on `llama.cpp` release [b5426](https://github.com/ggerganov/llama.cpp/releases/tag/b5426)<br> ## Technical Details Supports a context length of 131072 tokens. From [mistralai README.md](https://huggingface.co/mistralai/Devstral-Small-2505/blob/38b103a974acdb8861cc8cce6c60f5a477c6abe5/README.md): "Devstral excels at using tools to explore codebases, editing multiple files and power software engineering agents. The model debuts as the #1 open source model on SWE-bench. Despite its compact size of just 24 billion parameters, Devstral outperforms much larger models in agentic coding tasks. These tasks require exploring a codebase and making complex modifications to resolve issues." ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. ## Disclaimers LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
DanielNRU/pollen-ner-400
DanielNRU
2025-05-21T13:56:25Z
6
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:DeepPavlov/rubert-base-cased", "base_model:adapter:DeepPavlov/rubert-base-cased", "region:us" ]
null
2025-05-19T13:47:21Z
--- library_name: peft base_model: DeepPavlov/rubert-base-cased tags: - generated_from_trainer metrics: - precision - recall - f1 model-index: - name: pollen-ner-400 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pollen-ner-400 This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5198 - Precision: 0.4615 - Recall: 0.5783 - F1: 0.5134 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | No log | 1.0 | 50 | 0.7012 | 0.3902 | 0.2068 | 0.2703 | | No log | 2.0 | 100 | 0.6521 | 0.4251 | 0.3193 | 0.3647 | | No log | 3.0 | 150 | 0.6239 | 0.4528 | 0.4337 | 0.4431 | | No log | 4.0 | 200 | 0.5779 | 0.468 | 0.4699 | 0.4689 | | No log | 5.0 | 250 | 0.5643 | 0.4563 | 0.5241 | 0.4879 | | No log | 6.0 | 300 | 0.5509 | 0.4485 | 0.5422 | 0.4909 | | No log | 7.0 | 350 | 0.5305 | 0.4621 | 0.5502 | 0.5023 | | No log | 8.0 | 400 | 0.5256 | 0.4633 | 0.5703 | 0.5113 | | No log | 9.0 | 450 | 0.5217 | 0.4613 | 0.5743 | 0.5116 | | 0.9652 | 10.0 | 500 | 0.5198 | 0.4615 | 0.5783 | 0.5134 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.7.0+cu128 - Datasets 3.5.0 - Tokenizers 0.21.1
yolay/RAIF-Qwen2.5-1.5B
yolay
2025-05-21T13:53:54Z
3
1
null
[ "safetensors", "qwen2", "dataset:yolay/RAIF-ComplexInstruction-Qwen", "license:apache-2.0", "region:us" ]
null
2025-05-20T13:01:08Z
--- license: apache-2.0 datasets: - yolay/RAIF-ComplexInstruction-Qwen --- This model belongs to the official implementation of the paper "Incentivizing Reasoning for Advanced Instruction-Following of Large Language Models". Existing large language models (LLMs) face challenges in following complex instructions, especially when multiple constraints are present and organized in parallel, chaining, and branching structures. One intuitive solution, namely chain-of-thought (CoT), is expected to universally improve the capabilities of LLMs. However, we find that vanilla CoT exerts a negative impact on performance due to its superficial reasoning pattern of simply paraphrasing the instructions. It fails to peel back the compositions of constraints to identify their relationships across hierarchies of types and dimensions. To this end, we propose a systematic method to boost LLMs in dealing with complex instructions by incentivizing reasoning for test-time compute scaling. First, starting from the decomposition of complex instructions under existing taxonomies, we propose a reproducible data acquisition method. Second, we exploit reinforcement learning (RL) with verifiable rule-centric reward signals to cultivate reasoning specifically for instruction following. We address the shallow, non-essential nature of reasoning under complex instructions via sample-wise contrast for superior CoT enforcement. We also exploit behavior cloning of experts to facilitate a steady distribution shift from fast-thinking LLMs to skillful reasoners. Extensive evaluations on seven comprehensive benchmarks confirm the validity of the proposed method, where a 1.5B LLM achieves 11.74% gains with performance comparable to an 8B LLM. Qwen2.5-1.5B is our model optimized for advanced instruction-following under complex instructions. It corresponds to **Qwen2.5-1.5B-Instruct (Ours)** in Table 1. **Table 1** Performance on seven instruction benchmarks. Best/2nd best are marked **bold**/<u>underlined</u>. | Model | Method | IFEval | CELLO | CF Bench | Complex Bench | FB Bench | Follow Bench | Info Bench | Avg. 
| |------------------------|----------|--------|-------|----------|--------------|----------|--------------|------------|--------------| | Qwen2.5-1.5B-Instruct | I/O | 45.28 | 71.00 | 36.00 | 50.97 | 39.81 | 40.00 | 71.24 | 50.61 | | Qwen2.5-1.5B-Instruct | CoT | 28.65 | 59.30 | 22.00 | 32.94 | 37.31 | 29.28 | 62.22 | 38.81 (-11.79%) | | Qwen2.5-1.5B-Instruct | SDC | 41.95 | 66.10 | 30.00 | 41.70 | 36.52 | 37.39 | 67.55 | 45.89 (-4.71%) | | Qwen2.5-1.5B-Instruct | SFT | 65.61 | 71.20 | 48.00 | 57.46 | 42.75 | 56.47 | 76.22 | 59.67 (+9.06%) | | Qwen2.5-1.5B-Instruct | Ours | 44.91 | 73.50 | 53.66 | 63.92 | 58.67 | 59.82 | 81.95 | 62.35 (+11.74%) | | DeepSeek-Qwen1.5B | I/O† | 36.04 | 62.50 | 27.99 | 39.89 | 34.51 | 20.29 | 52.00 | 39.03 | | DeepSeek-Qwen1.5B | SFT | 45.29 | 63.20 | 25.33 | 35.53 | 37.59 | 22.18 | 51.96 | 40.15 (+1.12%) | | DeepSeek-Qwen1.5B | Ours | 57.67 | 69.00 | 40.00 | 44.38 | 37.78 | 37.79 | 60.48 | 49.58 (+10.54%) | | DeepScaleR-1.5B | I/O† | 41.77 | 65.00 | 30.00 | 40.70 | 40.24 | 26.01 | 60.31 | 43.43 | | DeepScaleR-1.5B | SFT | 48.24 | 62.90 | 28.00 | 36.68 | 35.72 | 26.50 | 54.22 | 41.75 (-1.67%) | | DeepScaleR-1.5B | Ours | 55.63 | 67.30 | 39.33 | 43.23 | 37.81 | 36.80 | 60.08 | 48.60 (+5.17%) | | Qwen2.5-7B-Instruct | I/O | 72.82 | 76.50 | 64.33 | 74.47 | 59.29 | 75.03 | <u>85.60</u> | <u>72.58</u> | | Qwen2.5-7B-Instruct | CoT | 69.50 | 75.20 | 61.66 | 72.00 | 42.65 | 74.86 | 82.13 | 68.28 (-4.29%) | | Qwen2.5-7B-Instruct | SDC | 60.44 | 72.60 | **65.66**| <u>76.53</u> | <u>60.07</u> | **76.09** | **86.88** | 71.18 (-1.39%) | | Qwen2.5-7B-Instruct | SFT | 72.45 | <u>77.50</u> | 63.33 | 74.23 | 58.76 | 75.92 | 84.31 | 72.36 (-0.21%) | | Qwen2.5-7B-Instruct | Ours | 70.06 | **79.20** | <u>65.00</u> | **77.40** | **64.45** | 75.32 | 82.67 | **73.44** (+0.85%) | | LLaMA3.1-8B-Instruct | I/O | <u>77.63</u> | 75.20 | 56.99 | 69.11 | 46.92 | 53.52 | 71.52 | 67.01 | | LLaMA3.1-8B-Instruct | CoT | 60.44 | 65.50 | 47.66 | 56.54 | 32.34 | 37.36 | 58.48 | 54.53 (-12.48%) | | LLaMA3.1-8B-Instruct | SDC | **80.22** | 71.00 | 58.33 | 68.73 | 38.36 | 48.92 | 72.89 | 65.24 (-1.77%) | | LLaMA3.1-8B-Instruct | SFT | 77.26 | 75.80 | 54.00 | 65.24 | 40.16 | 59.56 | 65.30 | 64.92 (-2.09%) | | LLaMA3.1-8B-Instruct | Ours | 13.49 | 4.6 | 1.33 | 2.71 | 7.14 | 1.08 | 0.51 | 4.06 (-62.95%) | | Ministral-8B-Instruct | I/O | 59.51 | 76.20 | 62.33 | 70.03 | 54.54 | 73.49 | 84.00 | 68.58 | | Ministral-8B-Instruct | CoT | 48.79 | 61.90 | 49.66 | 61.31 | 39.17 | 61.75 | 79.73 | 57.47 (-11.11%) | | Ministral-8B-Instruct | SDC | 58.59 | 63.60 | 56.99 | 68.32 | 48.06 | 69.37 | 84.08 | 64.14 (-4.43%) | | Ministral-8B-Instruct | SFT | 68.57 | 66.30 | 48.66 | 67.20 | 37.26 | 54.37 | 76.62 | 59.85 (-8.72%) | | Ministral-8B-Instruct | Ours | 72.64 | 72.6 | 59.33 | 70.45 | 54.35 | <u>76.08</u> | 75.33 | 68.68 (+0.10%) | | DeepSeek-Qwen7B | I/O† | 60.81 | 72.39 | 57.99 | 66.86 | 59.59 | 62.80 | 79.64 | 65.73 | | DeepSeek-Qwen7B | SFT | 67.09 | 69.10 | 58.66 | 58.42 | 55.60 | 65.96 | 79.15 | 64.85 (-0.88%) | | DeepSeek-Qwen7B | Ours | 71.35 | 71.40 | 58.67 | 62.04 | 59.65 | 59.38 | 82.00 | 66.35 (+0.62%) |
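The card gives no inference snippet; since the repo holds a Qwen2-architecture checkpoint in safetensors, a plausible sketch (untested) using the chat-style text-generation pipeline, with an illustrative multi-constraint prompt:

```python
from transformers import pipeline

generator = pipeline(
    "text-generation", model="yolay/RAIF-Qwen2.5-1.5B", device_map="auto"
)
prompt = (
    "Write a product summary in exactly three bullet points, "
    "each under 12 words, and end with a one-line disclaimer."
)
output = generator(
    [{"role": "user", "content": prompt}],
    max_new_tokens=512,
    return_full_text=False,
)[0]
print(output["generated_text"])
```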
kokovova/dc4e3fb8-2886-4589-a9c1-e33ab012dad8
kokovova
2025-05-21T13:52:53Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "mistral", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "unsloth", "conversational", "arxiv:2305.18290", "base_model:unsloth/mistral-7b-instruct-v0.2", "base_model:quantized:unsloth/mistral-7b-instruct-v0.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-21T13:38:16Z
--- base_model: unsloth/mistral-7b-instruct-v0.2 library_name: transformers model_name: dc4e3fb8-2886-4589-a9c1-e33ab012dad8 tags: - generated_from_trainer - axolotl - dpo - trl - unsloth licence: license --- # Model Card for dc4e3fb8-2886-4589-a9c1-e33ab012dad8 This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.2](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="kokovova/dc4e3fb8-2886-4589-a9c1-e33ab012dad8", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-28/runs/4y5bbamp) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
sergioalves/6bd775c3-70ab-4694-9566-4bd09cd813c3
sergioalves
2025-05-21T13:52:53Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "mistral", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "unsloth", "conversational", "arxiv:2305.18290", "base_model:unsloth/mistral-7b-instruct-v0.2", "base_model:quantized:unsloth/mistral-7b-instruct-v0.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-21T13:38:16Z
--- base_model: unsloth/mistral-7b-instruct-v0.2 library_name: transformers model_name: 6bd775c3-70ab-4694-9566-4bd09cd813c3 tags: - generated_from_trainer - axolotl - dpo - trl - unsloth licence: license --- # Model Card for 6bd775c3-70ab-4694-9566-4bd09cd813c3 This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.2](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="sergioalves/6bd775c3-70ab-4694-9566-4bd09cd813c3", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/61yzpola) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
summykai/Qwen3-14B-SFT-ReasoningChem-merged-16bit-run-uff1vvq1
summykai
2025-05-21T13:52:32Z
91
0
null
[ "safetensors", "qwen3", "license:apache-2.0", "region:us" ]
null
2025-05-13T13:51:03Z
--- license: apache-2.0 ---
dimasik87/cb46c897-7a55-4039-8913-fe9dcc46515e
dimasik87
2025-05-21T13:51:29Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "mistral", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "unsloth", "conversational", "arxiv:2305.18290", "base_model:unsloth/mistral-7b-instruct-v0.2", "base_model:quantized:unsloth/mistral-7b-instruct-v0.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-21T13:38:32Z
--- base_model: unsloth/mistral-7b-instruct-v0.2 library_name: transformers model_name: cb46c897-7a55-4039-8913-fe9dcc46515e tags: - generated_from_trainer - axolotl - dpo - trl - unsloth licence: license --- # Model Card for cb46c897-7a55-4039-8913-fe9dcc46515e This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.2](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="dimasik87/cb46c897-7a55-4039-8913-fe9dcc46515e", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/zkad1m2c) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mohhtl/d390b4e5-24a8-4825-b14d-1d7843fee845
mohhtl
2025-05-21T13:49:31Z
0
0
peft
[ "peft", "safetensors", "qwen2", "generated_from_trainer", "dataset:train.json", "base_model:unsloth/Qwen2-1.5B", "base_model:adapter:unsloth/Qwen2-1.5B", "license:apache-2.0", "region:us" ]
null
2025-05-21T13:49:23Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-1.5B tags: - generated_from_trainer datasets: - train.json model-index: - name: d390b4e5-24a8-4825-b14d-1d7843fee845 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.9.2` ```yaml adapter: lora base_model: unsloth/Qwen2-1.5B bf16: auto dataset_prepared_path: dd656613-2166-41f4-8840-76ceb5e9b641_last_run_prepared datasets: - path: train.json type: field: null field_input: null field_instruction: system field_output: prompt field_system: null format: null no_input_format: null system_format: '{system}' system_prompt: '' flash_attention: null gradient_accumulation_steps: 4 gradient_checkpointing: false learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false logging_steps: 1 lora_alpha: 8 lora_dropout: 0.05 lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: constant micro_batch_size: 2 model_type: AutoModelForCausalLM num_epochs: 15 optimizer: adamw_bnb_8bit output_dir: d390b4e5-24a8-4825-b14d-1d7843fee845 pad_to_sequence_len: null resume_from_checkpoint: null sample_packing: false save_epochs: 1 save_strategy: 'no' save_total_limit: 1 saves_per_epoch: 1 sequence_len: 2048 special_tokens: null tf32: false tokenizer_type: AutoTokenizer trust_remote_code: true val_set_size: 0.0 wandb_entity: null wandb_log_model: null wandb_name: null wandb_project: null wandb_watch: null warmup_ratio: 0.0 warmup_steps: 0 weight_decay: 0.0 ``` </details><br> # d390b4e5-24a8-4825-b14d-1d7843fee845 This model is a fine-tuned version of [unsloth/Qwen2-1.5B](https://huggingface.co/unsloth/Qwen2-1.5B) on the train.json dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.4.1+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
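Not shown in the card, but since the output directory holds a PEFT LoRA adapter over `unsloth/Qwen2-1.5B`, a plausible loading sketch (untested) uses `AutoPeftModelForCausalLM`, which resolves the base model from the adapter config:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Sketch: AutoPeftModelForCausalLM reads the base model id
# from the repo's adapter_config.json.
model = AutoPeftModelForCausalLM.from_pretrained(
    "mohhtl/d390b4e5-24a8-4825-b14d-1d7843fee845"
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2-1.5B")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```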
drmoldyn/qwen3-derek-style
drmoldyn
2025-05-21T13:49:12Z
0
0
null
[ "biology", "medical", "base_model:Qwen/Qwen3-32B", "base_model:finetune:Qwen/Qwen3-32B", "region:us" ]
null
2025-05-20T08:08:08Z
---
base_model:
- Qwen/Qwen3-32B
tags:
- biology
- medical
---

# Qwen3-DerekStyle Fine-tuned Model

This is a 32B parameter version of Qwen3 fine-tuned on Derek W. Russell's content.

## Model Description

This model builds upon Qwen3-32B, applying customized fine-tuning to better represent Derek's writing style, expertise, and knowledge across topics in medicine, research, and academic writing.

## Available Formats

- Original model: `qwen3_merged_model.tar.xz` - extract with `tar -xf qwen3_merged_model.tar.xz`
- GGUF format (Q6_K): `qwentune3-32b-derek-style-q6_k.gguf` - ready to use with llama.cpp, LM Studio, etc.

## Usage

### LM Studio

1. Download the GGUF file (`qwentune3-32b-derek-style-q6_k.gguf`)
2. Open LM Studio
3. Go to the Models tab
4. Click `+` and select the downloaded GGUF file

Recommended parameters:

- Context length: 128k
- Temperature: 0.7
- Top P: 0.95
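For use outside LM Studio, a minimal sketch with the `llama-cpp-python` bindings is shown below; the sampling values mirror the recommendations above, while the model path and the smaller context size are assumptions for a local run.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load the Q6_K GGUF quantization downloaded from this repository;
# n_ctx is set well below the recommended 128k to limit memory use
llm = Llama(model_path="qwentune3-32b-derek-style-q6_k.gguf", n_ctx=8192)

# Illustrative prompt; adjust to your own task
output = llm(
    "Summarize the key limitations of this study in two sentences:",
    max_tokens=256,
    temperature=0.7,
    top_p=0.95,
)
print(output["choices"][0]["text"])
```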
Detomo/cl-nagoya-sup-simcse-ja-nss-v_1_0_7_5
Detomo
2025-05-21T13:46:43Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:197418", "loss:CategoricalContrastiveLoss", "arxiv:1908.10084", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-05-21T13:46:15Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:197418 - loss:CategoricalContrastiveLoss widget: - source_sentence: 科目:コンクリート。名称:普通コンクリート。 sentences: - 科目:コンクリート。名称:多目的ホール浮き床コンクリート。 - 科目:コンクリート。名称:シンダーコンクリート。摘要:FC18N/mm2 スランプ15。備考:代価表 0038。 - 科目:コンクリート。名称:均しコンクリート。 - source_sentence: 科目:コンクリート。名称:コンクリート打設。 sentences: - 科目:コンクリート。名称:普通コンクリート。摘要:JIS A5308 FC30+ΔS(構造体補正)S18 粗骨材20高性能AE減水剤。備考:刊-コン 3018K免震層上部コン。 - 科目:コンクリート。名称:多目的ホール間柱基礎コンクリート。摘要:FC21N/mm2 スランプ18。備考:代価表 0041。 - 科目:コンクリート。名称:コンクリート打設手間。 - source_sentence: 科目:コンクリート。名称:土間コンクリート。 sentences: - 科目:コンクリート。名称:擁壁部コンクリート打設手間。 - 科目:タイル。名称:床タイルK。 - 科目:コンクリート。名称:土間コンクリート。摘要:FC18N/mm2 スランプ15。備考:代価表 0039。 - source_sentence: 科目:コンクリート。名称:基礎部コンクリート打設手間。 sentences: - 科目:コンクリート。名称:普通コンクリート。摘要:JIS A5308 FC33+ΔS(構造体補正)S15粗骨材20高性能AE減水剤・防水剤入。備考:刊-コン 3315KB基礎部コン。 - 科目:コンクリート。名称:機械基礎コンクリート。摘要:Fc24 S18粗骨材20。備考:代価表 0123。 - 科目:コンクリート。名称:土間コンクリート。 - source_sentence: 科目:コンクリート。名称:基礎部マスコンクリート。 sentences: - 科目:コンクリート。名称:オイルタンク基礎コンクリート。摘要:FC24 S18粗骨材20 高性能AE減水剤。備考:代価表 0108。 - 科目:タイル。名称:階段段鼻ノンスリップ役物タイル。 - 科目:コンクリート。名称:普通コンクリート。摘要:FC=24 S15粗骨材基礎部。備考:代価表 0054。 pipeline_tag: sentence-similarity library_name: sentence-transformers --- # SentenceTransformer This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Detomo/cl-nagoya-sup-simcse-ja-nss-v_1_0_7_5") # Run inference sentences = [ '科目:コンクリート。名称:基礎部マスコンクリート。', '科目:コンクリート。名称:オイルタンク基礎コンクリート。摘要:FC24 S18粗骨材20 高性能AE減水剤。備考:代価表 0108。', '科目:コンクリート。名称:普通コンクリート。摘要:FC=24 S15粗骨材基礎部。備考:代価表 0054。', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 197,418 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | label | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 11 tokens</li><li>mean: 13.71 tokens</li><li>max: 19 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 31.5 tokens</li><li>max: 72 tokens</li></ul> | <ul><li>0: ~61.50%</li><li>1: ~5.60%</li><li>2: ~32.90%</li></ul> | * Samples: | sentence1 | sentence2 | label | |:-----------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------|:---------------| | <code>科目:コンクリート。名称:コンクリートポンプ圧送。</code> | <code>科目:コンクリート。名称:ポンプ圧送。</code> | <code>1</code> | | <code>科目:コンクリート。名称:コンクリートポンプ圧送。</code> | <code>科目:コンクリート。名称:コンクリートポンプ圧送。摘要:100m3/回以上基本料金別途加算。備考:B0-434226 No.1 市場捨てコン。</code> | <code>0</code> | | <code>科目:コンクリート。名称:コンクリートポンプ圧送。</code> | <code>科目:コンクリート。名称:コンクリート打設手間。摘要:躯体 ポンプ打設100m3/回以上 S15~S18標準階高 圧送費、基本料別途。備考:B0-434215 No.1 市場地上部コン(1F)。</code> | <code>0</code> | * Loss: <code>sentence_transformer_lib.categorical_constrastive_loss.CategoricalContrastiveLoss</code> ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 256 - `per_device_eval_batch_size`: 256 - `learning_rate`: 1e-05 - `weight_decay`: 0.01 - `num_train_epochs`: 20 - `warmup_ratio`: 0.2 - `fp16`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 256 - `per_device_eval_batch_size`: 256 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 
1e-05 - `weight_decay`: 0.01 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 20 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.2 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `tp_size`: 0 - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | |:------:|:----:|:-------------:| | 0.0648 | 50 | 0.2993 | | 0.1295 | 100 | 0.1925 | | 0.1943 | 150 | 0.1197 | | 0.2591 | 200 | 0.1054 | | 0.3238 | 250 | 0.0849 | | 0.3886 | 300 | 0.0854 | | 0.4534 | 350 | 0.0716 | | 0.5181 | 400 | 0.0659 | | 0.5829 | 450 | 0.0641 | | 0.6477 | 500 | 0.0641 | | 0.7124 | 550 | 0.0619 | | 0.7772 | 600 | 0.0589 | | 0.8420 | 650 | 0.0564 | | 0.9067 | 700 | 0.0506 | | 0.9715 | 750 | 0.0513 | | 1.0363 | 800 | 0.0473 | | 1.1010 | 850 | 
0.0451 | | 1.1658 | 900 | 0.044 | | 1.2306 | 950 | 0.0418 | | 1.2953 | 1000 | 0.042 | | 1.3601 | 1050 | 0.0337 | | 1.4249 | 1100 | 0.0337 | | 1.4896 | 1150 | 0.0354 | | 1.5544 | 1200 | 0.0353 | | 1.6192 | 1250 | 0.0353 | | 1.6839 | 1300 | 0.0323 | | 1.7487 | 1350 | 0.0297 | | 1.8135 | 1400 | 0.0331 | | 1.8782 | 1450 | 0.0303 | | 1.9430 | 1500 | 0.0286 | | 2.0078 | 1550 | 0.0265 | | 2.0725 | 1600 | 0.0257 | | 2.1373 | 1650 | 0.0195 | | 2.2021 | 1700 | 0.0225 | | 2.2668 | 1750 | 0.0206 | | 2.3316 | 1800 | 0.0231 | | 2.3964 | 1850 | 0.0225 | | 2.4611 | 1900 | 0.0203 | | 2.5259 | 1950 | 0.0207 | | 2.5907 | 2000 | 0.02 | | 2.6554 | 2050 | 0.0181 | | 2.7202 | 2100 | 0.0202 | | 2.7850 | 2150 | 0.0187 | | 2.8497 | 2200 | 0.0192 | | 2.9145 | 2250 | 0.0168 | | 2.9793 | 2300 | 0.0162 | | 3.0440 | 2350 | 0.0159 | | 3.1088 | 2400 | 0.0145 | | 3.1736 | 2450 | 0.0134 | | 3.2383 | 2500 | 0.0138 | | 3.3031 | 2550 | 0.0125 | | 3.3679 | 2600 | 0.0132 | | 3.4326 | 2650 | 0.0122 | | 3.4974 | 2700 | 0.0133 | | 3.5622 | 2750 | 0.0127 | | 3.6269 | 2800 | 0.0125 | | 3.6917 | 2850 | 0.0107 | | 3.7565 | 2900 | 0.0114 | | 3.8212 | 2950 | 0.0104 | | 3.8860 | 3000 | 0.0107 | | 3.9508 | 3050 | 0.0112 | | 4.0155 | 3100 | 0.0084 | | 4.0803 | 3150 | 0.0086 | | 4.1451 | 3200 | 0.0077 | | 4.2098 | 3250 | 0.0098 | | 4.2746 | 3300 | 0.0068 | | 4.3394 | 3350 | 0.0082 | | 4.4041 | 3400 | 0.0064 | | 4.4689 | 3450 | 0.0083 | | 4.5337 | 3500 | 0.0065 | | 4.5984 | 3550 | 0.0067 | | 4.6632 | 3600 | 0.0074 | | 4.7280 | 3650 | 0.0078 | | 4.7927 | 3700 | 0.0072 | | 4.8575 | 3750 | 0.0077 | | 4.9223 | 3800 | 0.007 | | 4.9870 | 3850 | 0.0067 | | 5.0518 | 3900 | 0.0057 | | 5.1166 | 3950 | 0.0054 | | 5.1813 | 4000 | 0.0046 | ### Framework Versions - Python: 3.11.12 - Sentence Transformers: 4.1.0 - Transformers: 4.51.3 - PyTorch: 2.6.0+cu124 - Accelerate: 1.6.0 - Datasets: 2.14.4 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
DanielNRU/pollen-ner-300
DanielNRU
2025-05-21T13:46:41Z
7
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:DeepPavlov/rubert-base-cased", "base_model:adapter:DeepPavlov/rubert-base-cased", "region:us" ]
null
2025-05-19T13:45:40Z
--- library_name: peft base_model: DeepPavlov/rubert-base-cased tags: - generated_from_trainer metrics: - precision - recall - f1 model-index: - name: pollen-ner-300 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pollen-ner-300 This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0466 - Precision: 0.0 - Recall: 0.0 - F1: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:| | No log | 1.0 | 38 | 1.0466 | 0.0 | 0.0 | 0.0 | | No log | 2.0 | 76 | 1.0066 | 0.0 | 0.0 | 0.0 | | No log | 3.0 | 114 | 0.9747 | 0.0 | 0.0 | 0.0 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.7.0+cu128 - Datasets 3.5.0 - Tokenizers 0.21.1
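The card does not include a usage snippet. A minimal sketch for loading the adapter with PEFT follows; the label count and the example sentence are illustrative assumptions, since neither is documented in the card.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from peft import PeftModel

# num_labels is an assumed value; the card does not document the label set
base = AutoModelForTokenClassification.from_pretrained("DeepPavlov/rubert-base-cased", num_labels=3)
model = PeftModel.from_pretrained(base, "DanielNRU/pollen-ner-300")
tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/rubert-base-cased")

# Illustrative Russian sentence about pollen
inputs = tokenizer("В воздухе обнаружена пыльца берёзы и ольхи.", return_tensors="pt")
logits = model(**inputs).logits
print(logits.argmax(-1))  # predicted label id per token
```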
howdytherefolks/pic
howdytherefolks
2025-05-21T13:45:58Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-21T13:21:20Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: pic --- # Pic <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `pic` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "pic", "lora_weights": "https://huggingface.co/howdytherefolks/pic/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('howdytherefolks/pic', weight_name='lora.safetensors') image = pipeline('pic').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/howdytherefolks/pic/discussions) to add images that show off what you’ve made with this LoRA.
DanielNRU/pollen-ner-250
DanielNRU
2025-05-21T13:45:03Z
7
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:DeepPavlov/rubert-base-cased", "base_model:adapter:DeepPavlov/rubert-base-cased", "region:us" ]
null
2025-05-20T09:35:25Z
--- library_name: peft base_model: DeepPavlov/rubert-base-cased tags: - generated_from_trainer metrics: - precision - recall - f1 model-index: - name: pollen-ner-250 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pollen-ner-250 This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0743 - Precision: 0.0 - Recall: 0.0 - F1: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:| | No log | 1.0 | 32 | 1.0743 | 0.0 | 0.0 | 0.0 | | No log | 2.0 | 64 | 1.0481 | 0.0 | 0.0 | 0.0 | | No log | 3.0 | 96 | 1.0147 | 0.0 | 0.0 | 0.0 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.7.0+cu128 - Datasets 3.5.0 - Tokenizers 0.21.1
DanielNRU/pollen-ner-200
DanielNRU
2025-05-21T13:43:41Z
7
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:DeepPavlov/rubert-base-cased", "base_model:adapter:DeepPavlov/rubert-base-cased", "region:us" ]
null
2025-05-19T13:44:26Z
--- library_name: peft base_model: DeepPavlov/rubert-base-cased tags: - generated_from_trainer metrics: - precision - recall - f1 model-index: - name: pollen-ner-200 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pollen-ner-200 This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2554 - Precision: 0.0 - Recall: 0.0 - F1: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:| | No log | 1.0 | 25 | 1.2554 | 0.0 | 0.0 | 0.0 | | No log | 2.0 | 50 | 1.0955 | 0.0 | 0.0 | 0.0 | | No log | 3.0 | 75 | 1.0675 | 0.0 | 0.0 | 0.0 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.7.0+cu128 - Datasets 3.5.0 - Tokenizers 0.21.1
DanielNRU/pollen-ner-150
DanielNRU
2025-05-21T13:42:22Z
4
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:DeepPavlov/rubert-base-cased", "base_model:adapter:DeepPavlov/rubert-base-cased", "region:us" ]
null
2025-05-20T09:33:11Z
--- library_name: peft base_model: DeepPavlov/rubert-base-cased tags: - generated_from_trainer metrics: - precision - recall - f1 model-index: - name: pollen-ner-150 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pollen-ner-150 This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7104 - Precision: 0.0045 - Recall: 0.0020 - F1: 0.0028 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | No log | 1.0 | 19 | 1.7104 | 0.0045 | 0.0020 | 0.0028 | | No log | 2.0 | 38 | 1.3578 | 0.0 | 0.0 | 0.0 | | No log | 3.0 | 57 | 1.1513 | 0.0 | 0.0 | 0.0 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.7.0+cu128 - Datasets 3.5.0 - Tokenizers 0.21.1
DanielNRU/pollen-ner-50
DanielNRU
2025-05-21T13:40:23Z
5
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:DeepPavlov/rubert-base-cased", "base_model:adapter:DeepPavlov/rubert-base-cased", "region:us" ]
null
2025-05-20T09:31:35Z
--- library_name: peft base_model: DeepPavlov/rubert-base-cased tags: - generated_from_trainer metrics: - precision - recall - f1 model-index: - name: pollen-ner-50 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pollen-ner-50 This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.3267 - Precision: 0.0104 - Recall: 0.0482 - F1: 0.0172 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | No log | 1.0 | 7 | 2.3267 | 0.0104 | 0.0482 | 0.0172 | | No log | 2.0 | 14 | 2.2056 | 0.0084 | 0.0321 | 0.0133 | | No log | 3.0 | 21 | 2.0866 | 0.008 | 0.0221 | 0.0117 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.7.0+cu128 - Datasets 3.5.0 - Tokenizers 0.21.1
lmstudio-community/Llama-3.1-Nemotron-Nano-4B-v1.1-GGUF
lmstudio-community
2025-05-21T13:39:07Z
0
0
null
[ "gguf", "nvidia", "llama-3", "text-generation", "en", "dataset:nvidia/Llama-Nemotron-Post-Training-Dataset", "base_model:nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1", "base_model:quantized:nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1", "license:other", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-05-20T21:30:55Z
---
quantized_by: bartowski
pipeline_tag: text-generation
base_model: nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1
license_name: nvidia-open-model-license
language:
- en
datasets:
- nvidia/Llama-Nemotron-Post-Training-Dataset
tags:
- nvidia
- llama-3
license: other
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
base_model_relation: quantized
---

## 💫 Community Model> Llama 3.1 Nemotron Nano 4B v1.1 by Nvidia

*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.

**Model creator:** [nvidia](https://huggingface.co/nvidia)<br>
**Original model:** [Llama-3.1-Nemotron-Nano-4B-v1.1](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b5432](https://github.com/ggerganov/llama.cpp/releases/tag/b5432)<br>

## Technical Details

Supports a context length of 128k tokens.

Created from Llama 3.1 8B via pruning and distillation.

Tuned for reasoning, human chat preferences, and tasks such as RAG and tool calling.

## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.

## Disclaimers

LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, virus-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
winnienlp/sst2-model-raw
winnienlp
2025-05-21T13:37:09Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-21T13:24:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
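Although the template above leaves the quick-start section empty, a minimal sketch for this BERT text-classification checkpoint is given below; the repository name suggests an SST-2 sentiment setup, but the card does not confirm the label mapping, so treat the output labels as undocumented.

```python
from transformers import pipeline

# Hypothetical usage; label names and mapping are not documented in the card
classifier = pipeline("text-classification", model="winnienlp/sst2-model-raw")
print(classifier("A charming and often affecting journey."))
```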
yakupmrt7/blip2-fine-tuned-2
yakupmrt7
2025-05-21T13:35:39Z
0
0
transformers
[ "transformers", "safetensors", "blip-2", "visual-question-answering", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
visual-question-answering
2025-05-21T13:30:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
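As above, no quick-start code is provided. A minimal BLIP-2 visual-question-answering sketch follows; the image URL and question are placeholders, and it is an assumption that the fine-tune kept the base model's preprocessing.

```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# Hypothetical usage; preprocessing details are not documented in the card
processor = Blip2Processor.from_pretrained("yakupmrt7/blip2-fine-tuned-2")
model = Blip2ForConditionalGeneration.from_pretrained("yakupmrt7/blip2-fine-tuned-2")

image = Image.open(requests.get("https://example.com/photo.jpg", stream=True).raw)  # placeholder URL
inputs = processor(images=image, text="Question: what is in the picture? Answer:", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```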
Viral-Kulhad-Pizza-Couple-Video/FULL.VIDEO.LINK.Kulhad.Pizza.Viral.Video.Leaks.Official
Viral-Kulhad-Pizza-Couple-Video
2025-05-21T13:34:30Z
0
0
null
[ "region:us" ]
null
2025-05-21T13:32:05Z
Kulhad Pizza's new viral video is trending! This latest food sensation is capturing millions of views online. Discover what makes Kulhad Pizza special and why food lovers are going crazy over this unique dish. Watch the latest viral video and explore the story behind this trending street food. Learn how it’s made, where to get it, and why it’s a must-try in 2025. Stay updated with the newest food trends and viral videos. Click now to watch and share!
shah-sapna-video-link-4k/Full.redeem.craze.com.shah.sapna.viral.video.starcaptions.com.apk8d.redeem.craze.link
shah-sapna-video-link-4k
2025-05-21T13:34:05Z
0
0
null
[ "region:us" ]
null
2025-05-21T13:33:14Z
Robot2050/Summary_Rewrite
Robot2050
2025-05-21T13:32:11Z
0
0
null
[ "arxiv:2410.12788", "license:apache-2.0", "region:us" ]
null
2025-05-21T09:31:29Z
---
license: apache-2.0
---

<h1 align="center"> Meta-Chunking: Learning Text Segmentation and Semantic Completion via Logical Perception </h1>

<p align="center">
<a href="https://arxiv.org/abs/2410.12788"> <img alt="arXiv Paper" src="https://img.shields.io/badge/arXiv-Paper-b31b1b.svg?logo=arxiv"> </a>
<a href="https://huggingface.co/papers/2410.12788"> <img src="https://img.shields.io/badge/Huggingface-Paper-yellow?style=flat-square&logo=huggingface"> </a>
<a href="https://opensource.org/license/apache-2-0"> <img alt="Apache 2.0 License" src="https://img.shields.io/badge/License-Apache_2.0-4285f4.svg?logo=apache"> </a>
</p>

The summary and rewrite models were fully fine-tuned from Qwen2.5-3B-Instruct on 20K data entries from the CRUD benchmark, prepared with ERNIE-3.5-128K and QwQ-32B.
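A minimal inference sketch is given below, assuming the repository hosts a directly loadable checkpoint that follows the standard Qwen2.5 chat template; the card does not document the expected prompt format.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Robot2050/Summary_Rewrite")
model = AutoModelForCausalLM.from_pretrained("Robot2050/Summary_Rewrite", device_map="auto")

# Illustrative request; the exact instruction style used in training is not documented
messages = [{"role": "user", "content": "Summarize the following passage: ..."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```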
johnjeanc/OpenRS-GRPO
johnjeanc
2025-05-21T13:29:11Z
10
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:knoveleng/open-rs", "arxiv:2402.03300", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-09T15:48:21Z
--- base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B datasets: knoveleng/open-rs library_name: transformers model_name: OpenRS-GRPO tags: - generated_from_trainer - open-r1 - trl - grpo licence: license --- # Model Card for OpenRS-GRPO This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) on the [knoveleng/open-rs](https://huggingface.co/datasets/knoveleng/open-rs) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="johnjeanc/OpenRS-GRPO", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/112703024-national-chengchi-university/huggingface/runs/owr3hkm6) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
concept-unlearning/Qwen2.5-7B_npo_gdr_lora_wmdp_bio_v3
concept-unlearning
2025-05-21T13:27:44Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-21T13:25:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MinaMila/llama_instbase_unlearned_ug_e-6_1.0_0.25_0.5_ep3_LoRa_ACSEmployment_2_cfda_ep4_22
MinaMila
2025-05-21T13:25:05Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-21T13:25:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TIGER-Lab/General-Reasoner-Qwen2.5-7B
TIGER-Lab
2025-05-21T13:21:53Z
11,275
2
null
[ "safetensors", "qwen2", "General-Reasoner-7B", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "arxiv:2505.14652", "base_model:Qwen/Qwen2.5-7B", "base_model:finetune:Qwen/Qwen2.5-7B", "license:apache-2.0", "region:us" ]
null
2025-04-06T20:32:08Z
--- license: apache-2.0 language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara base_model: - Qwen/Qwen2.5-7B tags: - General-Reasoner-7B --- # General-Reasoner: Advancing LLM Reasoning Across All Domains <p align="center"> <a href="https://github.com/TIGER-AI-Lab/General-Reasoner" target="_blank">💻 Code</a> | <a href="https://arxiv.org/abs/2505.14652" target="_blank">📄 Paper</a> | <a href="https://huggingface.co/datasets/TIGER-Lab/WebInstruct-verified" target="_blank">📊 Dataset</a> | <a href="https://huggingface.co/collections/TIGER-Lab/general-reasoner-67fe9386e43e046489eac013" target="_blank">🤗 Model</a> | <a href="https://tiger-ai-lab.github.io/General-Reasoner/" target="_blank">🌐 Project Page</a> </p> ## Overview <p align="center"> <img src="https://tiger-ai-lab.github.io/General-Reasoner/static/images/teaser.png" alt="General-Reasoner Teaser" width="650"/> </p> <p align="center" style="font-style: italic; font-size: 0.95rem;"> <em> Figure: Effectiveness of <strong>General-Reasoner</strong> trained with diverse verifiable reasoning questions using model-based verifier compared to baseline methods on various reasoning tasks. </em> </p> **General-Reasoner** is a training paradigm for large language models (LLMs), designed to robustly enhance reasoning abilities across diverse domains—not just mathematics and coding, but also physics, chemistry, finance, humanities, and more. **Key features:** - **Zero RL Training:** Direct reinforcement learning from base LLMs, bypassing intermediate supervised stages. - **Diverse Reasoning Data:** 230K+ high-quality, verifiable questions sourced from the web and filtered for answer verifiability across disciplines. - **Model-Based Verifier:** Compact 1.5B generative verifier model for context-aware, chain-of-thought answer validation, outperforming traditional rule-based methods. **This specific model is the General-Reasoner variant trained based on [Qwen2.5-7B-Base](https://huggingface.co/Qwen/Qwen2.5-7B).** ## Main Results General-Reasoner outperforms base and supervised models on a variety of reasoning benchmarks, demonstrating robust generalization across domains: <p align="center"> <a href="https://github.com/TIGER-AI-Lab/General-Reasoner/raw/refs/heads/gh-pages/static/images/results_general.png" target="_blank"> <img src="https://github.com/TIGER-AI-Lab/General-Reasoner/raw/refs/heads/gh-pages/static/images/results_general.png" alt="Main Results" width="600"> </a> </p> ## Citation If you feel our work is helpful, please cite: ```bibtex @article{general-reasoner, title={{G}eneral-{R}easoner: Advancing LLM Reasoning Across All Domains}, author={Xueguang Ma and Qian Liu and Dongfu Jiang and Ge Zhang and Zejun Ma and Wenhu Chen}, year={2025}, journal={arXiv:2505.14652}, url={https://arxiv.org/abs/2505.14652} } ```
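The card omits a quick-start snippet; a minimal sketch follows. Because Zero RL Training starts from the Qwen2.5-7B base model, a plain-text prompt is assumed here rather than a chat template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TIGER-Lab/General-Reasoner-Qwen2.5-7B")
model = AutoModelForCausalLM.from_pretrained("TIGER-Lab/General-Reasoner-Qwen2.5-7B", device_map="auto")

# Illustrative reasoning prompt; not taken from the paper or card
prompt = "A ball is dropped from a height of 20 m. How long does it take to reach the ground? Reason step by step."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```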
axelbellec/synapse-med-llama-3.2-3b-instruct-lora
axelbellec
2025-05-21T13:21:39Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "llama-v3p2-3b-instruct", "en", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-3B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-21T13:19:39Z
--- base_model: meta-llama/Llama-3.2-3B-Instruct tags: - text-generation-inference - transformers - llama-v3p2-3b-instruct license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** axelbellec - **License:** apache-2.0 - **Finetuned from model:** meta-llama/Llama-3.2-3B-Instruct [<img src="https://synapse-medicine.notion.site/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2F6e62ebe4-16bb-4879-842b-269d97235452%2FLogo_Synapse_Blue.png?table=block&id=1b9e0047-cb45-4cc4-9bd5-a5729a1926b6&spaceId=87011edc-ece1-4f38-8a1b-a2f427a3e33c&width=2000&userId=&cache=v2" width="200"/>](https://synapse-medicine.com)
TIGER-Lab/General-Reasoner-Qwen2.5-14B
TIGER-Lab
2025-05-21T13:20:40Z
26
4
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "en", "dataset:TIGER-Lab/VisualWebInstruct-Verified", "arxiv:2505.14652", "base_model:Qwen/Qwen2.5-14B", "base_model:finetune:Qwen/Qwen2.5-14B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-11T19:56:51Z
--- license: apache-2.0 language: - en base_model: - Qwen/Qwen2.5-14B datasets: - TIGER-Lab/VisualWebInstruct-Verified library_name: transformers --- # General-Reasoner: Advancing LLM Reasoning Across All Domains <p align="center"> <a href="https://github.com/TIGER-AI-Lab/General-Reasoner" target="_blank">💻 Code</a> | <a href="https://arxiv.org/abs/2505.14652" target="_blank">📄 Paper</a> | <a href="https://huggingface.co/datasets/TIGER-Lab/WebInstruct-verified" target="_blank">📊 Dataset</a> | <a href="https://huggingface.co/collections/TIGER-Lab/general-reasoner-67fe9386e43e046489eac013" target="_blank">🤗 Model</a> | <a href="https://tiger-ai-lab.github.io/General-Reasoner/" target="_blank">🌐 Project Page</a> </p> ## Overview <p align="center"> <img src="https://tiger-ai-lab.github.io/General-Reasoner/static/images/teaser.png" alt="General-Reasoner Teaser" width="650"/> </p> <p align="center" style="font-style: italic; font-size: 0.95rem;"> <em> Figure: Effectiveness of <strong>General-Reasoner</strong>, trained on diverse verifiable reasoning questions with a model-based verifier, compared to baseline methods on various reasoning tasks. </em> </p> **General-Reasoner** is a training paradigm for large language models (LLMs), designed to robustly enhance reasoning abilities across diverse domains—not just mathematics and coding, but also physics, chemistry, finance, humanities, and more. **Key features:** - **Zero RL Training:** Direct reinforcement learning from base LLMs, bypassing intermediate supervised stages. - **Diverse Reasoning Data:** 230K+ high-quality, verifiable questions sourced from the web and filtered for answer verifiability across disciplines. - **Model-Based Verifier:** A compact 1.5B generative verifier model for context-aware, chain-of-thought answer validation, outperforming traditional rule-based methods. **This specific model is the General-Reasoner variant trained from [Qwen2.5-14B-Base](https://huggingface.co/Qwen/Qwen2.5-14B).** ## Main Results General-Reasoner outperforms base and supervised models on a variety of reasoning benchmarks, demonstrating robust generalization across domains: <p align="center"> <a href="https://github.com/TIGER-AI-Lab/General-Reasoner/raw/refs/heads/gh-pages/static/images/results_general.png" target="_blank"> <img src="https://github.com/TIGER-AI-Lab/General-Reasoner/raw/refs/heads/gh-pages/static/images/results_general.png" alt="Main Results" width="600"> </a> </p> ## Citation If you find our work helpful, please cite: ```bibtex @article{general-reasoner, title={{G}eneral-{R}easoner: Advancing LLM Reasoning Across All Domains}, author={Xueguang Ma and Qian Liu and Dongfu Jiang and Ge Zhang and Zejun Ma and Wenhu Chen}, year={2025}, journal={arXiv:2505.14652}, url={https://arxiv.org/abs/2505.14652} } ```
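A minimal chat-style sketch for the 14B variant (assumptions: the model accepts chat-message input through the transformers text-generation pipeline, and the question is illustrative):

```python
# Query General-Reasoner-Qwen2.5-14B through the text-generation pipeline.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TIGER-Lab/General-Reasoner-Qwen2.5-14B",
    device_map="auto",
)
messages = [{"role": "user", "content": "Why does ice float on water? Reason it out, then give a final answer."}]
result = generator(messages, max_new_tokens=512, return_full_text=False)
print(result[0]["generated_text"])
```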
Darkhn/L3.3-70B-Amalgamma-V4
Darkhn
2025-05-21T13:20:15Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2406.11617", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-21T11:59:47Z
--- base_model: [] library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DELLA](https://arxiv.org/abs/2406.11617) merge method using /media/administrator/oiseauxai1data/modelout/Smart-base-v2 as a base. ### Models Merged The following models were included in the merge: * /media/administrator/oiseauxai1data1/modelout/Middle-Base-V2 * /media/administrator/oiseauxai1data/modelout/Story-Base-V2 * /media/administrator/oiseauxai1data/modelweights/Anathema-V2-LLaMA-70b ### Configuration The following YAML configuration was used to produce this model: ```yaml # --- Mergekit Example: della --- # Method: DELLA merging (magnitude-based sampling of parameter deltas; see arXiv:2406.11617). # Each model's deltas from the base are stochastically pruned by magnitude, rescaled, and linearly combined. base_model: /media/administrator/oiseauxai1data/modelout/Smart-base-v2 # The foundational model models: - model: /media/administrator/oiseauxai1data/modelweights/Anathema-V2-LLaMA-70b parameters: weight: 0.40 # Contribution of this model (40%); can also be a per-layer gradient, e.g. [0.1, 0.1, 0.1, 0.2, 0.5] density: 0.80 # Sparsity/pruning factor for this model's contribution. epsilon: 0.1 # Single epsilon for the pruning - model: /media/administrator/oiseauxai1data/modelout/Story-Base-V2 parameters: weight: 0.35 # Contribution of this model (35%); can also be a per-layer gradient, e.g. [0.1, 0.1, 0.1, 0.2, 0.5] density: 0.75 # Sparsity/pruning factor for this model's contribution. epsilon: 0.10 # Single epsilon for the pruning - model: /media/administrator/oiseauxai1data1/modelout/Middle-Base-V2 parameters: weight: 0.30 # Contribution of this model (30%); can also be a per-layer gradient, e.g. [0.1, 0.1, 0.1, 0.2, 0.5] density: 0.60 # Sparsity/pruning factor for this model's contribution. epsilon: 0.15 # Single epsilon for the pruning model_name: L3.3-70B-Amalgamma-V4 # Name of your merge dtype: float32 # Computation dtype: float32, float16, or bfloat16 out_dtype: bfloat16 # Output dtype: float32, float16, or bfloat16 merge_method: della parameters: normalize: false # If true (default), weights are normalized to sum to 1. # If false, absolute weights are used. lambda: 1.1 # Single lambda for scaling the final merged deltas tokenizer_source: base # 'base' takes the tokenizer from base_model; 'union' merges all vocabularies (use with care) chat_template: llama3 # Chat template (chatml, llama3, etc.) license: apache-2.0 # License type ```
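A minimal sketch of reproducing the merge with mergekit's `mergekit-yaml` CLI (assumptions: mergekit is installed, the YAML above is saved as `config.yaml` (a hypothetical filename), and the local `/media/administrator/...` paths point at checkpoints you actually have on disk):

```python
# Invoke mergekit's CLI from Python; equivalent to running it in a shell.
import subprocess

subprocess.run(
    ["mergekit-yaml", "config.yaml", "./L3.3-70B-Amalgamma-V4"],
    check=True,  # raise CalledProcessError if the merge fails
)
```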
MinaMila/llama_instbase_LoRa_ACSEmployment_2_ep4_22
MinaMila
2025-05-21T13:19:20Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-21T13:19:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Samas21/Mu51c
Samas21
2025-05-21T13:19:08Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-21T02:55:39Z
--- license: apache-2.0 ---
meerkqat/model-mirror
meerkqat
2025-05-21T13:18:34Z
0
0
null
[ "region:us" ]
null
2025-01-29T05:06:54Z
Personal use mirror of CivitAI models for simplifying env setup automation. Model links: - NTR MIX | illustrious-XL | Noob-XL : https://civitai.com/models/926443 - RealismIllustriousByStableYogi : https://civitai.com/models/974693 - RealisticVision : https://civitai.com/models/4201 - WAI-NSFW-illustrious-SDXL : https://civitai.com/models/827184 - ZEP8 : https://civitai.com/models/1067151/zep8 Lora links: - 2ViewsPanties : https://civitai.com/models/1153238/2views-panties - AnatomyHelper : https://civitai.com/models/1171869/anatomy-helper - AshleyGraves : https://civitai.com/models/819229/ashley-graves-the-coffin-of-andy-and-leyley-illustriousxl-lora - CasyTayStyle : https://civitai.com/models/1228487/style-casytay-illustrious-xlnoobai-xl - CharlieMorningstar : https://civitai.com/models/1491650/charlie-morningstar-hazbin-hotel-2-outfits - ContortionSuspendedHanging : https://civitai.com/models/1222948/contortion-suspended-hanging-concept-illustrious - ClosedMouthFullOfCum : https://civitai.com/models/539254/closed-mouth-full-of-cum-lora-or-ponyxl-and-illustrious - CunnilingusWhileStandingOnPenis : https://civitai.com/models/1013415/cunnilingus-while-standing-on-penis - DanglingLegs : https://civitai.com/models/535745/dangling-legs-lora-or-ponyxl-and-illustrious - DarkFireStyle01 : https://civitai.com/models/1233963/darkfirestyle01 - DemonTongue : https://civitai.com/models/1306998/demon-tongues - DreadmirthStyle : https://civitai.com/models/883535/dreadmirth-style-art - FaceInAss : https://civitai.com/models/932951/face-in-ass - FeetAnimeIllustriousXL_v2.5 : https://civitai.com/models/1107767/odor-feetanimeillustriousxl - FemdomSandwichThreesome : https://civitai.com/models/436979/femdom-sandwich-threesome - FlatColor : https://civitai.com/models/1132089/flat-color-style - FootWorship : https://civitai.com/models/946498/pov-footworship-nai-vpred-or-illustrious-or-pony - FullWeightFacesitting : https://civitai.com/models/876630/fullweight-facesitting-part-4 - GamingClothes : https://civitai.com/models/1238948/gaming-clothes - GloryWallLora : https://civitai.com/models/1255358/glory-wall-lora-or-illustrious - HazbinHotelStyle : https://civitai.com/models/1343918/hazbin-hotel-style-il - KissMultipleViewCloseUp : https://civitai.com/models/1256430/kiss-multiple-view-close-up-illustrious - MGEMonsterGirls : https://civitai.com/models/556479/mge-monster-girls - MomoAyase : https://civitai.com/models/942367/dandadan-momo-ayase-pony-illustrious - NipplePenetration : https://civitai.com/models/492887/nipple-penetration-lora-or-ponyxl-and-illustrious - NoseHookLora : https://civitai.com/models/1250664/nose-hook-lora-or-illustrious - PinchingBellyFat : https://civitai.com/models/977343/pinching-belly-fat-one-two-hands - PissDrinkingFemdom : https://civitai.com/models/869192/piss-drinking-femdom-pissing-part-4 - PussyWorshipFemdom : https://civitai.com/models/904315/pussy-worship-femdom - RealisticVaginas_InniePussy1 : https://civitai.com/models/10332/realistic-vaginas-innie-pussy-1 - ReverseFellatio : https://civitai.com/models/543154/reverse-fellatio-lora-or-ponyxl-and-illustrious - ReverseHeadscissor: https://civitai.com/models/1601201/reverse-headscissor?modelVersionId=1811995 - ReverseShrimpSuspension : https://civitai.com/models/680453 - SharpTeethAndFangs : https://civitai.com/models/1063052/sharp-teeth-and-fangs - StealthFellatio : https://civitai.com/models/537491/stealth-fellatio-under-table-lora-or-ponyxl-and-illustrious - StuckInSmallSpace : 
https://civitai.com/models/1110539/stuck-in-small-space - Tarot : https://civitai.com/models/1117304/tarot-illustrious-concept - TesticleGrab : https://civitai.com/models/1207602 - ThickCum : https://civitai.com/models/1107369/thick-cum-illustrious - ThighSexLora : https://civitai.com/models/634236/thigh-sex-lora-or-ponyxl-and-illustrious - VixionsDetailedPixelArtStyle : https://civitai.com/models/1130279/vixons-illustrious-styles-detailed-pixel-art - VoreStyleLoraVoraciousMoga : https://civitai.com/models/1089819 - XRayGlasses : https://civitai.com/models/177451 Other links: - Seljak wildcards : https://civitai.com/models/1138197/7000-sexy-prompts-snitched-from-seljak
taguser/kantra-epoch3-2025-May-21
taguser
2025-05-21T13:17:50Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "lora", "generated_from_trainer", "base_model:Qwen/Qwen2.5-Coder-14B-Instruct", "base_model:adapter:Qwen/Qwen2.5-Coder-14B-Instruct", "license:other", "region:us" ]
null
2025-05-21T13:17:29Z
--- library_name: peft license: other base_model: Qwen/Qwen2.5-Coder-14B-Instruct tags: - llama-factory - lora - generated_from_trainer model-index: - name: test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct) on the training_dataset dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 16 - total_train_batch_size: 32 - total_eval_batch_size: 16 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.15.1 - Transformers 4.51.0 - Pytorch 2.7.0+cu126 - Datasets 3.5.0 - Tokenizers 0.21.1
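A minimal inference sketch for this adapter (assumptions: it attaches to its base model via the standard PEFT API, and the user message is illustrative):

```python
# Load the base Qwen2.5-Coder model, attach this LoRA adapter, and generate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-Coder-14B-Instruct"
adapter_id = "taguser/kantra-epoch3-2025-May-21"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights

messages = [{"role": "user", "content": "Write a simple analyzer rule for a Java migration."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```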
New-tutorial-Kulhad-Pizza-Viral-Video/Orginal.Full.Clip.Kulhad.Pizza.Viral.Video.Leaks.Official
New-tutorial-Kulhad-Pizza-Viral-Video
2025-05-21T13:16:01Z
0
0
null
[ "region:us" ]
null
2025-05-21T13:15:29Z
Kulhad Pizza new viral video trending! This latest food sensation is capturing millions of views online. Discover what makes Kulhad Pizza special and why food lovers are going crazy over this unique dish. Watch the latest viral video and explore the story behind this trending street food. Learn how it’s made, where to get it, and why it’s a must-try in 2025. Stay updated with the newest food trends and viral videos. Click now to watch and share!
GregCheap/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-raging_dextrous_robin
GregCheap
2025-05-21T13:15:36Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am raging dextrous robin", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-06T15:38:09Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-raging_dextrous_robin tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am raging dextrous robin - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-raging_dextrous_robin This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="GregCheap/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-raging_dextrous_robin", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mini1013/master_item_top_ps_flat
mini1013
2025-05-21T13:13:56Z
47
0
setfit
[ "setfit", "safetensors", "xlm-roberta", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:mini1013/master_item_top_ps_flat", "base_model:finetune:mini1013/master_item_top_ps_flat", "model-index", "region:us" ]
text-classification
2025-05-21T07:15:44Z
--- tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: '매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 사료>건식사료 매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 고양이용품 > 사료 > 건식 상품명 : 뉴질랜드 기능성 처방식 고양이 사료 HLD 간 기능 부전 장 개선 사료 5.21kg 옵션명 : 3kg ' - text: '상품명 : [마이펫닥터] 시그니처 포 라이프 시니어 강아지 사료 종합 영양 심장 비뇨 관절 다이어트 장 노견 노령견, 2kg, 1ea 옵션명 : 포 라이프 시니어 2kg ' - text: '매핑_카테고리1 : (#M)홈>🐶강아지 사료🍚>건식사료 매핑_카테고리2 : Naverstore > 반려동물용품 > 강아지용품 > 사료 > 건식 상품명 : 카나간 사료 프리런 치킨 포독 스몰 브리드 2kg 옵션명 : 🦃 하이랜드 피스트(칠면조 꿩)_6kg_5.포켄스 펫디저트 과일퓨레 105g ' - text: '매핑_카테고리2 : Naverstore > 반려동물용품 > 강아지용품 > 간식 > 캔/파우치 옵션명 : 매핑_카테고리1 : (#M)홈>생활/건강>반려동물>강아지 간식>캔/파우치 상품명 : 시저캔 강아지간식 캔 통조림 애견간식 ' - text: '옵션명 : 01=1_XS 상품명 : 원피스 공주 고양이 멜빵 민소매 활 애견 조끼 강아지 드레스 인스 스트리머 투투 스커트 ' metrics: - accuracy pipeline_tag: text-classification library_name: setfit inference: true base_model: mini1013/master_item_top_ps_flat model-index: - name: SetFit with mini1013/master_item_top_ps_flat results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 0.8774402903051344 name: Accuracy --- # SetFit with mini1013/master_item_top_ps_flat This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_item_top_ps_flat](https://huggingface.co/mini1013/master_item_top_ps_flat) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. 
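A minimal inference sketch (assumptions: the standard setfit API; the input string is an illustrative example in the flattened product-listing format used by the widget examples above):

```python
# Classify a flattened Korean product listing into one of the 124 categories.
from setfit import SetFitModel

model = SetFitModel.from_pretrained("mini1013/master_item_top_ps_flat")
preds = model.predict([
    "상품명 : 시저캔 강아지간식 캔 통조림 애견간식 옵션명 : ",
])
print(preds)  # integer-coded category label(s)
```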
## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [mini1013/master_item_top_ps_flat](https://huggingface.co/mini1013/master_item_top_ps_flat) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 124 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 86 | <ul><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>미용/목욕>이발기 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 미용/목욕 > 이발기 상품명 : 리케이 킴라베 327 옐로우 강아지 클리퍼 옵션명 : '</li><li>'옵션명 : 블랙 상품명 : 탑컷 New 전문가용 프로 애견 이발기 YD-9000 신형 클리퍼(YD9000) '</li><li>'옵션명 : 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 미용/목욕 > 이발기 매핑_카테고리1 : (#M)생활/건강>반려동물>미용/목욕>이발기 상품명 : [본사직배] 파워플라이 이발기 SP-20C 강아지 바리깡 반려동물 뽀송하개 '</li></ul> | | 93 | <ul><li>'상품명 : 스텐레스 쌍식기 대(32oz) 옵션명 : 본품 1개 매핑_카테고리1 : (#M)생활/건강>반려동물>식기/급수기>자동급식기 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 식기/급수기 > 식기/식탁 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>식기/급수기>자동급식기 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 식기/급수기 > 자동급식기 상품명 : 고양이 자동 물 디스펜서 먹이 그릇 애완동물 사료 옵션명 : 스테인레스 젠틀맨 그레이 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>식기/급수기>자동급식기 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 식기/급수기 > 자동급식기 상품명 : 자동급식기 강아지 밥그릇+급수기 길 고양이 옵션명 : '</li></ul> | | 85 | <ul><li>'옵션명 : 상품명 : 은은한향이나는 반려동물전용 펫향수 고양이탈취제 매핑_카테고리1 : (#M)생활/건강>반려동물>미용/목욕>에센스/향수/밤 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 미용/목욕 > 향수/탈취제 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>미용/목욕>에센스/향수/밤 옵션명 : 상품명 : 플러쉬퍼피 프로틴 코트밤 강아지밤 고양이/ 강아지 보습 에센스 정전기방지 보습제 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 건강/관리용품 > 피부/털관리 '</li><li>'상품명 : Breezy tail (브리지테일) 페토세라 미스트 센서티브 150ml 무향 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 건강/관리용품 > 피부/털관리 옵션명 : 고양이 랜덤 사은품 매핑_카테고리1 : (#M)생활/건강>반려동물>미용/목욕>에센스/향수/밤 '</li></ul> | | 28 | <ul><li>'매핑_카테고리1 : 홈>사료 & 습식캔>강아지;홈>사료 & 습식>강아지;(#M)홈>사료&습식>강아지 옵션명 : 상품명 : 로얄캐닌 독 하이포알러제닉 캔 400g [아토피 알러지] 매핑_카테고리2 : Naverstore > 반려동물용품 > 강아지용품 > 사료 > 화식 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 사료>습식사료 옵션명 : 매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 강아지용품 > 사료 > 습식 상품명 : 닥터맘마 촉촉사료 실꼬리돔 700g (50g x 14ea) '</li><li>'옵션명 : 매핑_카테고리2 : Naverstore > 반려동물용품 > 강아지용품 > 사료 > 화식 상품명 : 로얄캐닌 처방식 캔- 레날캔 410g 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 사료>습식사료 '</li></ul> | | 95 | <ul><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 패션용품 > 가방 옵션명 : 상품선택_02강아지가방-M 매핑_카테고리1 : (#M)생활/건강>반려동물>야외용품>가슴줄 상품명 : 강아지슬링백 가방 하네스 산책 유치원 리드줄 하네스 백팩 '</li><li>'옵션명 : 
상품명 : 산책 하네스 가슴줄 레드M SET 강아지 조끼 고양이 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>야외용품>가슴줄 옵션명 : 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 이동/산책용품 > 가슴줄 상품명 : [페스룸] 컴포트-X 하네스 2.0 (M) X자형 기도 압박없는 강아지 하네스 가슴줄 목줄 '</li></ul> | | 120 | <ul><li>'옵션명 : 핑크_XL 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 플리스 상품명 : 애완동물 양털 아리조나 강아지 고양이 뽀글이 매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>플리스 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>플리스 옵션명 : 아이보리_L 상품명 : [페스룸] 이지 하네스 시어링 집업 & 리쉬 강아지 겨울옷 아우터 패딩 리드줄 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 패션용품 > 플리스 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>플리스 옵션명 : 핑크_L 상품명 : 강아지후리스 보아 후리스 조끼 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 플리스 '</li></ul> | | 65 | <ul><li>'상품명 : 냥템점 고양이 자동 장난감 쥐 놀이 위키드 마우스 옵션명 : 블루 매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 장난감>자동장난감 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 장난감 > 자동장난감 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 장난감>자동장난감 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 장난감 > 스크래쳐 상품명 : 야오미 야오미 우드 엘 대형 스크래쳐 [리필] 옵션명 : '</li><li>'옵션명 : 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 장난감 > 자동장난감 상품명 : 펫케어 춤추는가재 (움직이는 로봇 생선 고양이 캣닢 자동 장난감) 매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 장난감>자동장난감 '</li></ul> | | 77 | <ul><li>'매핑_카테고리2 : T200 > Naverstore > 생활/주방용품 > 청소용품 > 유리닦이용품 옵션명 : 상품명 : 랩신 V3 다목적알코올 티슈 50매 x 6개 매핑_카테고리1 : (#M)생활/건강>반려동물>미용/목욕>기타 미용/목욕용품 '</li><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 미용/목욕 > 기타 미용/목욕용품 매핑_카테고리1 : (#M)생활/건강>반려동물>미용/목욕>기타 미용/목욕용품 상품명 : 애견 미용 강아지 거치대 고정 받침대 셀프 반려 동물 옵션명 : 보라색 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>미용/목욕>기타 미용/목욕용품 상품명 : 브리지테일 페토세라 보송보솜 80매 (목화순면 ) 옵션명 : 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 미용/목욕 > 기타 미용/목욕용품 '</li></ul> | | 106 | <ul><li>'옵션명 : 옐로우_2XL 상품명 : 강아지옷 피너츠 스누피 하프 양면 후리스 조끼 S-3XL '</li><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 베스트/조끼 상품명 : 애견의류 강아지잠옷 면 개 조끼 귀여운 애완 동물 복장 고양이 치와와 캠핑독 고양이조끼 옵션명 : 파란_XS 매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>베스트/조끼 '</li><li>'옵션명 : 많은 꽃 조끼 L 사이즈 등 길이 35cm 상품명 : 겨울 할머니 니트 고양이 강아지 옷 꽃무늬 김장조끼 매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>베스트/조끼 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 베스트/조끼 '</li></ul> | | 37 | <ul><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 간식>수제간식 옵션명 : 새싹보리500g100gx5 매핑_카테고리2 : MP > traverse > Naverstore > 반려동물용품 > 수제간식 상품명 : 고양이 캣그라스 키우기 헤어볼 도우미 고양이풀 수경재배 화분 대용량 '</li><li>'매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 수제간식 상품명 : (오클)더내추럴 호박 고구마칩 300g X5 애견 수제간식 건조 매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 간식>수제간식 옵션명 : '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 간식>수제간식 매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 수제간식 옵션명 : 상품명 : 출산후 고양이 간식 황태 소고기 스프 6개 특별한 '</li></ul> | | 2 | <ul><li>'상품명 : 네츄럴코어 올리고칩 45g 고구마/당근/단호박/블루베리/황태 옵션명 : 네츄럴코어 올리고칩 45g 블루베리 '</li><li>'상품명 : 밥이보약 DOG 건강쿠키 면역쑥쑥 120g 매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 강아지용품 > 간식 > 비스킷/스낵 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 간식>비스킷/스낵 옵션명 : '</li><li>'상품명 : 네츄럴코어 올리고칩 대용량 옵션명 : 올리고칩 블루베리 250g 매핑_카테고리1 : (#M)홈>🍖강아지 간식>🔷모음전🔷 매핑_카테고리2 : Naverstore > 반려동물용품 > 강아지용품 > 간식 > 비스킷/스낵 '</li></ul> | | 69 | <ul><li>'상품명 : 퍼피 극세사 강아지계단 2층 매핑_카테고리1 : (#M)생활/건강>반려동물>리빙용품>계단/스텝 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 리빙용품 > 계단/슬라이드 옵션명 : 브라운 '</li><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 리빙용품 > 계단/슬라이드 매핑_카테고리1 : (#M)생활/건강>반려동물>리빙용품>계단/스텝 옵션명 : 3단계단(그레이) 상품명 : 강아지 수납형 계단 2단 3단 스텝 몽글이 오픈형 계단 '</li><li>'옵션명 : 검은색 6 사다리 80X106X43CM 상품명 : 접이식 애견 미끄럼방지 계단 사다리 휴대용 '</li></ul> | | 81 | <ul><li>'상품명 : 애구애구 강아지 고양이 발톱깍기 혈관이 보여 안전한 LED 발톱깍이 옵션명 : 핑크 '</li><li>'상품명 : 헬로도기 강아지빗 고양이빗 털제거 브러쉬 장모용 옵션명 : 핑크 '</li><li>'옵션명 : 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 미용/목욕 > 발톱/발 관리 매핑_카테고리1 : 
(#M)생활/건강>반려동물>미용/목욕>발톱/발 관리 상품명 : 펫그루밍 발톱가위S(레드) '</li></ul> | | 63 | <ul><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 장난감 > 낚시/막대 상품명 : 카샤카샤 재롱스틱 참새 고양이낚시대 스틱 장난감 깃털 매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 장난감>낚시/막대 옵션명 : 10. 카샤카샤 붕붕 슈퍼롱 새 [리필] '</li><li>'옵션명 : Assorted Color 48PCS 매핑_카테고리2 : MP > traverse > Naverstore > 반려동물용품 > 고양이용품 > 사료 상품명 : 치와바 48팩 플라스틱 시끄러운 고양이 장난감 공과 벨 키튼 체이스 8종 다양한 색상 크기 Assorted Color 48PCS 매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 장난감>낚시/막대 '</li><li>'옵션명 : With USB_CHINA 상품명 : 스마트 인터랙티브 고양이 컬러 LED 자체 회전 애완동물 공 USB 충전식 자동 액세서리 '</li></ul> | | 34 | <ul><li>'상품명 : Cat Craft 사이잘 & 셔닐 숲 소나무 및 버섯 고양이 스크래칭 포스트 세트 옵션명 : '</li><li>'상품명 : 베이펫 동결건조 촙촙트릿 스팀 닭가슴살 50g 매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 고양이용품 > 간식 > 동결건조 매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 간식>동결건조 간식 옵션명 : '</li><li>'옵션명 : 닭간 120g 상품명 : 동결건조간식 보틀 딸기 25g 강아지 고양이 트릿 허글에프디바이츠 '</li></ul> | | 107 | <ul><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>셔츠/블라우스 상품명 : 유치원 강아지 유치원복 고양이 모자 개린이 핸드메이드 명품 애견 S XL 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 안경/모자 옵션명 : 블루_S '</li><li>'상품명 : 플로럴 러플 나시 블라우스 [DW3MB1230] 옵션명 : 32 그린_SM_02 매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>셔츠/블라우스 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 패션용품 > 셔츠/블라우스 '</li><li>'상품명 : 신축성좋은강아지옷 강아지귀여운옷 강아지 테디 애완 동물 코튼 반려견옷 귀여운강아지 옵션명 : M '</li></ul> | | 18 | <ul><li>'상품명 : Crayon 생분해 배변봉투 풉백 똥츄 케이스 디스펜서 풉백 케이스 디스펜서 똥츄 리필 배변봉투 똥봉투 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 이동/산책용품 > 배변봉투/집게 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 배변용품>배변봉투/집게 옵션명 : 2. 케이스(그린) '</li><li>'옵션명 : 상품명 : [바잇미] 생분해성 웁스백 배변봉투 120매 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 배변용품>배변봉투/집게 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 이동/산책용품 > 배변봉투/집게 '</li><li>'옵션명 : 1. 생분해 배변봉투 상품명 : 생분해 Crayon 배변봉투 풉백 똥츄 디스펜서 케이스 반려동물 똥봉투 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 배변용품>배변봉투/집게 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 이동/산책용품 > 배변봉투/집게 '</li></ul> | | 30 | <ul><li>'옵션명 : 상품명 : 데굴데굴 오뚜기 노즈워크 스낵볼 반려동물 간식볼 장난감 (블루) '</li><li>'상품명 : 강아지 노즈워크 장난감 당근노즈워크 12구 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 장난감/훈련 > 노즈워크 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 장난감/훈련>노즈워크 옵션명 : 완두콩 노즈워크 '</li><li>'상품명 : 티티펫 강아지 당근밭 노즈워크 장난감 6피스 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 장난감/훈련 > 노즈워크 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 장난감/훈련>노즈워크 옵션명 : '</li></ul> | | 87 | <ul><li>'옵션명 : 상품명 : 슈퍼드라이 강아지 S 1P 펫 목욕수건 타월 M 핑크 반려 애견 동물 묘 고양이 견 수건 목욕 '</li><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 패션용품 > 원피스/드레스 옵션명 : 옐로우_XL 상품명 : Os 엔젤 하네스 원피스 강아지옷 옷 여름옷 가을옷 강아지 봄옷 매핑_카테고리1 : (#M)생활/건강>반려동물>미용/목욕>타월/가운 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>미용/목욕>타월/가운 상품명 : 애견 극세사 부드러운 타올 타월 강아지 펫 수건 옵션명 : 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 미용/목욕 > 타월/가운 '</li></ul> | | 48 | <ul><li>'옵션명 : 뉴트리플랜 고양이 간식 습식캔 흰살참치와 연어 상품명 : 뉴트리플랜 고양이 간식 습식캔 흰살참치와 연어 160g 48캔 매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 배변용품>거름망형화장실 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 배변용품 > 거름망형화장실 '</li><li>'상품명 : 반려묘 배변통 화장실 사막화 방지 모래매트 캐비닛 투명유리 살균 소독 탈취제 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 배변용품 > 거름망형화장실 매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 배변용품>거름망형화장실 옵션명 : 15파운드 이내 그레이(중형) '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 배변용품>거름망형화장실 옵션명 : 펫팀 고양이 캣그라스 새싹보리 고양이풀 간식 상품명 : 펫팀 고양이 캣그라스 새싹보리 고양이풀 간식 캔 화분 식물키우기 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 배변용품 > 거름망형화장실 '</li></ul> | | 49 | <ul><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 배변용품 > 매트/발판 옵션명 : 매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 배변용품>매트/발판 상품명 : 정글몬스터 쏙쏙 고양이 모래 매트 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 배변용품>매트/발판 옵션명 : 상품명 : 벌집 사각매트 1p 화장실매트 사막화방지 모래매트 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 배변용품 > 매트/발판 
'</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 배변용품>매트/발판 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 배변용품 > 매트/발판 옵션명 : Gray_S 35CM 상품명 : 반려견매트 애견매트 폭발적인 직조 고양이 귀 코튼 로프 접합 귀여운 논슬립매트 먼지없는 '</li></ul> | | 7 | <ul><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 간식>캔/파우치 매핑_카테고리2 : Naverstore > 펫탭 > 브랜드직영관 > 강아지 간식 옵션명 : 상품명 : 동원 뉴트리플랜 홀릭 흰살참치와 야채과일 85g 24개 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 간식>캔/파우치 옵션명 : 닭고기+양고기 상품명 : 피어 사각캔 강아지 습식 간식 캔 닭고기 소고기 100g 매핑_카테고리2 : Naverstore > 반려동물용품 > 강아지용품 > 간식 > 캔/파우치 '</li><li>'매핑_카테고리2 : Naverstore > 반려동물용품 > 강아지용품 > 간식 > 캔/파우치 매핑_카테고리1 : (#M)홈>캔ㆍ파우치>시저 상품명 : 시저 심플리 크래프티드37g 시저캔 옵션명 : 심플리 닭고구마사과보리37g '</li></ul> | | 45 | <ul><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 건강/관리용품>구강관리용품 옵션명 : 상품명 : 쉽고편한 반려동물 알약 물약 투여기 필건 주사기 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 건강/관리용품 > 구강관리 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 건강/관리용품>구강관리용품 옵션명 : 상품명 : [CS-160] 챠오 덴탈케어 츄르치약 - 참치 매핑_카테고리2 : MP > Naverstore > inabapetfood브랜드스토어 > 전체상품 '</li><li>'상품명 : 젤 치약 강아지 고양이 자이목스 오라틴 투스페이스트 70g 옵션명 : '</li></ul> | | 44 | <ul><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 건강/관리용품>고양이유산균 매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 고양이용품 > 건강/관리용품 > 유산균 옵션명 : 일반배송 상품명 : [빅쿠폰] 클리닉스 Pro 5A 프로파이브A 고양이 액상 유산균 15ml 동물병원 정식제품 '</li><li>'옵션명 : 매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 고양이용품 > 건강/관리용품 > 유산균 상품명 : 퓨리나 포티플로라 고양이 유산균 30포 x 6개 FortiFlora 고양이 장건강 매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 건강/관리용품>고양이유산균 '</li><li>'매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 고양이용품 > 건강/관리용품 > 유산균 매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 건강/관리용품>고양이유산균 옵션명 : 상품명 : 벨벳 웰케어 유산균 고양이용 투약보조제 50개 + 10개 '</li></ul> | | 61 | <ul><li>'매핑_카테고리2 : Naverstore > 반려동물용품 > 고양이용품 > 사료 > 수제 옵션명 : 상품명 : 더 그릴드 오븐베이크 소고기+고구마150g 매핑_카테고리1 : 생활/건강>반려동물>고양이 사료>수제사료 '</li><li>'옵션명 : 상품명 : 고급 생연어 테비랑 5kg '</li><li>'옵션명 : 포함 상품명 : 정글키친 고양이 생식 더블민스 치킨 포켓100팩/7.5kg 두배 분쇄한 무스 타입 뼈 없는 닭고기 / 어류 알러지 FREE 영양식 '</li></ul> | | 31 | <ul><li>'상품명 : 애견장난감 옥수수모양 양치 치석제거 옵션명 : '</li><li>'옵션명 : 상품명 : 바스락 코끼리 애견펫토이 애묘펫토이 인형 삑삑이 몸통 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 장난감/훈련>자동장난감 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 장난감/훈련 > 자동장난감 '</li><li>'옵션명 : Puppy Antler_Petite 상품명 : Nylabone 강아지 츄 젠틀 츄잉 앤틀러 대체 치킨 맛 츄토이 쁘띠 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 장난감/훈련 > 자동장난감 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 장난감/훈련>자동장난감 '</li></ul> | | 111 | <ul><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>안경/모자 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 안경/모자 상품명 : 강아지군밤 강아지선글라스 2016 새로운 개 고양이 산타 클로스 고양이모자 강아지썬캡 옵션명 : red_L '</li><li>'상품명 : 강아지모자 군밤 겨울 고양이 귀도리 무스탕 반려견 대형견 펫 방울 산책용품 베이지 S-M 옵션명 : 베이지_L-XL '</li><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 안경/모자 매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>안경/모자 옵션명 : 컬러 상품명 : 키밍 애견 선글라스 고양이 촬영 냥글라스 안경 '</li></ul> | | 101 | <ul><li>'옵션명 : L_챠콜그레이 매핑_카테고리1 : (#M)생활/건강>반려동물>이동장/외출용품>카시트 상품명 : 멍뭉스 더블쿠션 강아지카시트 M 반려견 애견카시트 펫 차시트 조수석 뒷자리 차량용 소형견 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 이동/산책용품 > 카시트 '</li><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 이동/산책용품 > 카시트 상품명 : 차 강아지 콘솔 고양이차 애완 카시트 박스펫 DD- 애견 12434 옵션명 : 매핑_카테고리1 : (#M)생활/건강>반려동물>이동장/외출용품>카시트 '</li><li>'상품명 : 애견 강아지 카시트 뒷좌석 방수 풀커버 옵션명 : '</li></ul> | | 83 | <ul><li>'상품명 : 온도측정 강아지 욕조 스파 목욕탕 샤워걸이 목욕 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 미용/목욕 > 샤워기/욕조 매핑_카테고리1 : (#M)생활/건강>반려동물>미용/목욕>샤워기/욕조 옵션명 : 강아지 욕조 '</li><li>'옵션명 : 2. 
파랑 상품명 : [바썸펫] 스마트 온도측정 접이식 강아지 욕조 목욕탕 스파 애견 반려견 고양이 '</li><li>'상품명 : 강아지온천 스파 욕조 고양이 입욕제 매핑_카테고리1 : (#M)생활/건강>반려동물>미용/목욕>샤워기/욕조 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 미용/목욕 > 샤워기/욕조 옵션명 : 레몬 옐로우(낮은 배스 배럴) '</li></ul> | | 51 | <ul><li>'상품명 : 고양이 모래삽 국내산 옵션명 : 와인 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 배변용품>분변통/모래삽 옵션명 : 7mm-두부모래용 상품명 : 고양이 화장실 모래삽 두부 모래 벤토나이트 메탈 대형 똥삽 주걱 스쿱 화장실삽 7mm 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 배변용품 > 분변통/모래삽 '</li><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 배변용품 > 분변통/모래삽 매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 배변용품>분변통/모래삽 옵션명 : 상품명 : 가는 모래용 모래삽 - 레몬 고양이화장실청소 '</li></ul> | | 25 | <ul><li>'옵션명 : 닥터할리 펫밀크 홍삼 200ml X 10개 매핑_카테고리2 : MP > naver_plus_traverse_extension > Naverstore > 반려동물용품 > 강아지 사료 > 분유/우유 상품명 : 닥터할리 펫밀크 카라멜 200ml X 10개 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 사료>분유/우유 '</li><li>'상품명 : 비어파 락톨(분유) 퍼피 500g 어린 강아지 분유사료 옵션명 : 매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 강아지용품 > 사료 > 분유/펫밀크 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 사료>분유/우유 '</li><li>'매핑_카테고리1 : 생활/건강>반려동물>강아지 사료>분유/우유;(#M)생활/건강>반려동물>강아지 간식>음료 상품명 : 뉴트리플랜 펫밀크 반려견전용 55ml 옵션명 : _지점명:aboutpet 매핑_카테고리2 : Naverstore > 펫탭 > 어바웃펫 > 강아지 사료 '</li></ul> | | 73 | <ul><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 리빙용품 > 울타리 상품명 : 고양이 중형 철장 캣 팬스 집 하우스 케이지 옵션명 : 매핑_카테고리1 : (#M)생활/건강>반려동물>리빙용품>울타리 '</li><li>'상품명 : 반려동물 애견목보호대 강아지목쿠션 고양이 목카라 매핑_카테고리1 : (#M)생활/건강>반려동물>리빙용품>울타리 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 리빙용품 > 넥카라 옵션명 : '</li><li>'상품명 : 프리미엄 라온 울타리 노랑 8P 반려동물 리빙용품 옵션명 : '</li></ul> | | 78 | <ul><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>미용/목욕>드라이기/드라이룸 상품명 : 고양이털말리기 바람펫 스마트 건조상자 집 목욕 매핑_카테고리2 : T200 > Naverstore > 가전 > 펫가전 > 드라이룸 옵션명 : B타입 중형 48x48x50 캐티 약 16개 '</li><li>'옵션명 : 매핑_카테고리2 : T200 > Naverstore > 가전 > 펫가전 > 드라이룸 상품명 : 캐치웰 강아지 고양이 드라이기 반려동물 털 건조기 펫드라이룸 매핑_카테고리1 : (#M)생활/건강>반려동물>미용/목욕>드라이기/드라이룸 '</li><li>'옵션명 : 매핑_카테고리1 : (#M)생활/건강>반려동물>미용/목욕>드라이기/드라이룸 상품명 : [APST-2041 에이플러스 스탠드 강아지 드라이기 핑크 BLDC 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 펫가전 '</li></ul> | | 52 | <ul><li>'옵션명 : 상품명 : 호랑이 모래 무향 6kg '</li><li>'매핑_카테고리1 : (#M)홈>생활/건강>반려동물>고양이 사료>건식사료 옵션명 : 매핑_카테고리2 : Naverstore > 반려동물용품 > 고양이용품 > 사료 > 건식 상품명 : [LG유니참] 데오토일렛 사막화 방지 소취 항균 고양이 화장실 모래 2L x 8개 '</li><li>'옵션명 : 미스터킴 대형 고양이 화장실 후드 서랍형 분리 매핑_카테고리2 : Naverstore > 반려동물용품 > 고양이용품 > 간식 > 동결건조 상품명 : 미스터킴 특대형 고양이 화장실 2도어 사막화방지 후드형 변기 배변통 모래삽포함 항균필터 미스터킴(반려동물) 매핑_카테고리1 : (#M)홈>생활/건강>반려동물>고양이 간식>동결건조 간식 '</li></ul> | | 24 | <ul><li>'상품명 : 프론티어 사료 비프 소고기 동결건조사료 300g 매핑_카테고리2 : Naverstore > 반려동물용품 > 강아지용품 > 사료 > 동결건조 옵션명 : 사은품2번(네츄럴EX간식2개)_비프(소)300g 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 사료>동결건조 사료 '</li><li>'매핑_카테고리2 : Naverstore > 반려동물용품 > 강아지용품 > 사료 > 건식 상품명 : 로얄캐닌 독 모빌리티 7kg 매핑_카테고리1 : (#M)홈>강아지처방식>건식사료 옵션명 : '</li><li>'상품명 : 스텔라앤츄이스 츄이스 치킨 밀믹서 8oz(226g)- 10월 셋째주 이후 순차 출고 옵션명 : '</li></ul> | | 41 | <ul><li>'상품명 : 고양이 캣닢 사탕 캔디 볼 음수량 증가 헤어볼 감소 스트레스해소 캣잎 장난감 골골 공 옵션명 : 핑크 '</li><li>'상품명 : 모리네 고양이 캣그라스 무농약 고양이풀 수경재배 키트 매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 고양이용품 > 간식 > 캣닢/캣그라스 매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 간식>캣닢/캣그라스 옵션명 : 병포장 무농약 햇우리밀씨앗 250g '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 간식>캣닢/캣그라스 매핑_카테고리2 : Naverstore > 펫탭 > 브랜드직영관 > 고양이 간식 옵션명 : 상품명 : 펫모닝 공굴리는 고양이 캣닢볼 '</li></ul> | | 70 | <ul><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>리빙용품>넥카라 상품명 : 오가닉 넥카라 깔대기 강아지 고양이 핥는 긁는 습관 개선 피부보호 옵션명 : S 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 리빙용품 > 넥카라 '</li><li>'상품명 : 아기 고양이 강아지 중성화 초경량 천 넥카라 깔때기 XXS 옵션명 : XS_민트 '</li><li>'상품명 : 더플래 반반 초경량 강아지 고양이 넥카라 S 매핑_카테고리1 : (#M)생활/건강>반려동물>리빙용품>넥카라 옵션명 : M_퍼플/옐로 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 리빙용품 > 넥카라 '</li></ul> 
| | 55 | <ul><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 배변용품>평판형화장실 옵션명 : 상품명 : 지올플라스트 맥스 평판화장실(Max 레드) 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 배변용품 > 평판형화장실 '</li><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 배변용품 > 후드형화장실 매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 배변용품>평판형화장실 상품명 : 엠펫 초대형 고양이 화장실 CAT-L 16 -블루 애묘 후드 캣화장실 배변 용품 옵션명 : '</li><li>'상품명 : 캣 토일렛 레드 변소 고양이화장실 1p 모래튐방지 냄새억제 분리형 원형 야옹이 매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 배변용품>평판형화장실 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 배변용품 > 평판형화장실 옵션명 : '</li></ul> | | 57 | <ul><li>'상품명 : 포항 목재 펠릿 20kg 국내산 우드 펠렛 옵션명 : 포항 목재펠릿 20kg '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 배변용품>흡수형모래 상품명 : 에코 펠라인 퓨어 9.08kg 펠릿 고양이 화장실 펠렛모래 흡수형모래 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 배변용품 > 흡수형모래 옵션명 : '</li><li>'옵션명 : 무향 5L x 6개 상품명 : 룽펑 크리스탈 모래 실리카겔 무향 5L x 6개 '</li></ul> | | 60 | <ul><li>'상품명 : 룩트 대용량 초코 클러스터 200g 옵션명 : '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 사료>분유/우유 상품명 : 에코펫 숨탄우유 펫밀크180mlX10개 강아지 고양이 전용우유 옵션명 : 매핑_카테고리2 : Naverstore > 반려동물용품 > 고양이용품 > 사료 > 분유/펫밀크 '</li><li>'옵션명 : 매핑_카테고리2 : Naverstore > 반려동물용품 > 고양이용품 > 사료 > 분유/우유 매핑_카테고리1 : 생활/건강>반려동물>고양이 사료>분유/우유 상품명 : 파스퇴르 전용목장1급A원유 저지방우유190ml(24팩) '</li></ul> | | 19 | <ul><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 배변용품>배변유도제 상품명 : 펫퍼스 배변유도제 옵션명 : 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 배변용품 > 배변유도제 '</li><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 배변용품 > 배변유도제 상품명 : 유린오프 유린화인더 옵션명 : 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 배변용품>배변유도제 '</li><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 배변용품 > 배변유도제 옵션명 : 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 배변용품>배변유도제 상품명 : 브리더 배변유도제 화장실유도제 30ml '</li></ul> | | 94 | <ul><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>식기/급수기>정수기/필터 옵션명 : 01=White_15x15x16cm_1L 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 식기/급수기 > 정수기/필터 상품명 : 고양이 자동 음수대 분수 급수기 정수기 순환 반려 물 개 디펜서 동물 음소거 꽃잎 투명 '</li><li>'옵션명 : 스모크그레이2L건열방지1매필터 매핑_카테고리1 : (#M)생활/건강>반려동물>식기/급수기>정수기/필터 상품명 : 자동급수기 싱글족 반려동물 고양이 저소음 애견 반자동 스마트 무음정수기 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 식기/급수기 > 정수기/필터 '</li><li>'옵션명 : 특별혜택 매핑_카테고리1 : (#M)생활/건강>반려동물>식기/급수기>정수기/필터 상품명 : 고양이 분수대 정화 깨끗한 반려동물 공급 식수 순환 스마트 물 강아지 워터 자동 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 식기/급수기 > 정수기/필터 '</li></ul> | | 108 | <ul><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>수영복/구명조끼 상품명 : 강아지옷 고양이 후르츠 멍키니 애견 수영복 물놀이 반려동물 비키니 옵션명 : 블루_S 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 계절용품 > 수영복/구명조끼 '</li><li>'상품명 : 소형견 애견의류 상어 수영 인어 보트 안전 재킷 수영복 신축성좋은강아지옷 강아지나시 옵션명 : 파란_XS '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>수영복/구명조끼 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 계절용품 > 수영복/구명조끼 옵션명 : 옐로우_M 상품명 : [오름OREUM] 강아지 수영복 자외선 차단 병아리 나시 (S / 2XL) '</li></ul> | | 119 | <ul><li>'옵션명 : S 상품명 : 강아지바지 청바지 데님 비숑 푸들 포메 말티 M인디고 S-XL '</li><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 팬츠/스커트 매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>팬츠/스커트 옵션명 : 옐로우 플라워 견인 가능_L(권장체중 8-11근) 상품명 : 강아지 겨울 원피스 옷 애견 2xl 공주 반려견 드레스 고양이 소형견 의류 웨딩 맨투맨 '</li><li>'상품명 : 강아지 멜빵 서스펜더 데이지 멜빵치마 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 패션용품 > 팬츠/스커트 매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>팬츠/스커트 옵션명 : 라이트블루_M '</li></ul> | | 8 | <ul><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 간식>빵/케이크 매핑_카테고리2 : Naverstore > 반려동물용품 > 강아지용품 > 간식 > 빵/케이크 상품명 : 강아지자연식 고기파운드 2개,5개,10개세트 고양이 노령견 화식 촉촉한 펫베이커리 나니스펫푸드 옵션명 : 닭안심2개(60g+60g) '</li><li>'옵션명 : 상품명 : 반려견소세지 저지방 애견 소시지 30개입 1p 소고기 야채 영양 '</li><li>'옵션명 : 매핑_카테고리2 : Naverstore > 반려동물용품 > 강아지용품 > 간식 > 수제간식 상품명 : 강아지생식 강아지간식 만들기 닭가슴살 1kg 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 간식>수제간식 '</li></ul> | | 105 | <ul><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 계절용품 > 우산/우비 옵션명 : 이토_S (목둘레 
20-26cm) 매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>레인코트 상품명 : 얼룩 반려동물 페이셜 의상 이발 케어 방수 고양이 비옷 망토 '</li><li>'상품명 : 소형견우비 초대형견용 애완 동물 야외 빠른 건조 후드 노란색 강아지비옷 대형견우비 옵션명 : YELLOW_L '</li><li>'옵션명 : S_브라운 상품명 : 강아지 배가리개 댕댕이산책복 애견레인코트 '</li></ul> | | 21 | <ul><li>'매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 정기구독 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 배변용품>배변패드 옵션명 : 100매 (50x40cm) 3개 상품명 : 슬기로운 패드 절약형 100매 (50x40cm) '</li><li>'매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 정기구독 > 강아지 > 배변패드 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 배변용품>배변패드 상품명 : 냄새잡는 참숯 강아지 패드 20g 300매 애견 배변패드 블랙 옵션명 : 차콜중대형80g(3팩 총63매) '</li><li>'옵션명 : 상품명 : [정기구독가능] 바잇미 보솜패드 라이트 배변패드 - 대형 40매 (기존 용량 두 배/가벼운 무게) '</li></ul> | | 10 | <ul><li>'상품명 : [에스틴] 강아지 유산균 독 리얼비피더스(STN Dog Real BIFIDUS) 60포 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 건강/관리용품>강아지유산균 매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 강아지용품 > 건강/관리용품 > 유산균 옵션명 : 60포(x1박스)_사은품3 '</li><li>'옵션명 : ②장건강&관절 특허 유산균 10P [체험팩] 상품명 : 종근당 라비벳 강아지 고양이 유산균 샘플 장건강 피부 10P 체험팩 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 건강/관리용품>강아지유산균 매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 강아지용품 > 건강/관리용품 > 유산균 '</li><li>'상품명 : 웰케어 강아지 유산균 약효보조제 짜먹는 생유산균 50p 매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 강아지용품 > 건강/관리용품 > 유산균 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 건강/관리용품>강아지유산균 옵션명 : '</li></ul> | | 6 | <ul><li>'옵션명 : 상품명 : 건강한펫 동결건조 리코타 치즈 플레인 110g '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 사료>분유/우유 옵션명 : ※신상 닥터할리 산양유 유산균 180ml 매핑_카테고리2 : Naverstore > 반려동물용품 > 강아지용품 > 사료 > 분유/펫밀크 상품명 : 에버그로 강아지 고양이 눈 관절 펫밀크 우유 150ml 반려견우유 '</li><li>'매핑_카테고리1 : (#M)홈>간식>더리얼 dog 옵션명 : 상품명 : 더리얼 저키 닭가슴살 20g 매핑_카테고리2 : Naverstore > harimpetfood브랜드스토어 > 간식 > 더리얼 dog '</li></ul> | | 91 | <ul><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 식기/급수기 > 사료통/사료스푼 매핑_카테고리1 : (#M)생활/건강>반려동물>식기/급수기>사료통/사료스푼 상품명 : 모데르나 펫위즈덤 트렌드스토리(사료보관함 6L) 옵션명 : '</li><li>'옵션명 : 상품명 : 트랜드스토리(사료보관함 20L) 펫위즈덤 모데르나 매핑_카테고리1 : (#M)생활/건강>반려동물>식기/급수기>사료통/사료스푼 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 식기/급수기 > 사료통/사료스푼 '</li><li>'옵션명 : 상품명 : 스타일도기 애견 사료 계량 스푼 강아지 사료 저울 스푼 '</li></ul> | | 114 | <ul><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 재킷/점퍼 매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>재킷/점퍼 옵션명 : SM 상품명 : 앤블랭크 뚱이 강아지 스냅 풀오버 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>재킷/점퍼 옵션명 : 야구점퍼(네이비)+티셔츠(아이보리)_S+S 상품명 : [세트할인] 디즈니 픽사 몬스터 대학교 티셔츠+야구점퍼 세트 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 재킷/점퍼 '</li><li>'옵션명 : 오렌지 S 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 패션용품 > 재킷/점퍼 상품명 : 강아지패딩 강이지겨울옷 오렌지 매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>재킷/점퍼 '</li></ul> | | 15 | <ul><li>'옵션명 : 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 건강/관리용품 > 구강관리 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 건강/관리용품>치약 상품명 : 시그니처바이 스토모액트 베오 강아지 고양이 유산균 치약 '</li><li>'상품명 : 버박 CET 치약 70g 강아지 고양이 치약 치석 입냄새 제거 닭고기맛 옵션명 : 1.닭고기맛 '</li><li>'옵션명 : 상품명 : (강아지, 고양이 겸용) 시그니처바이 스토모액트 베오 90g 바르는 유산균 효소 치약 '</li></ul> | | 89 | <ul><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>서비스>파티용품 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 파티/장례서비스 > 파티용품 옵션명 : 파산 소녀 옷 (스커트 + 앞치마)_XL 상품명 : broke girls cos 의상 코스프레 의상 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>서비스>파티용품 상품명 : 가짜 수염 특수분장 파티소품 사극 코스프레 매핑_카테고리2 : T200 > traverse > Naverstore > 취미/문구/악기 > 수집품 > 코스튬플레이 옵션명 : E. 
A형 검은수염+접착제 제거제 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>서비스>파티용품 상품명 : [에썸] 4p 블링팝 파티용 야광 스켈레톤 장갑 스켈레톤장갑 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 파티/장례서비스 > 파티용품 옵션명 : '</li></ul> | | 27 | <ul><li>'상품명 : [듀먼] 닭가슴살&초록입홍합 튼튼관절 화식사료 16팩 50g/100g 옵션명 : 튼튼관절50g 16팩 ao07 '</li><li>'매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 강아지용품 > 사료 > 습식 옵션명 : 상품명 : 강아지 화식 오리 연어 120g 반려견 자연식 사료 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 사료>습식사료 '</li><li>'상품명 : 페노비스 강아지 화식 자연식 연어황태참치 피부 장 80g x 11팩 one option 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 사료>건식사료 매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 강아지용품 > 사료 > 화식 옵션명 : '</li></ul> | | 46 | <ul><li>'옵션명 : 상품명 : 강아지 개 쿠션형 푹신한 넥카라 중형견 목보호 '</li><li>'옵션명 : 상품명 : 고양이 귀세정 귀닦기 반려묘 귀로션 바르는 애완동물 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 건강/관리용품>눈/귀 관리용품 옵션명 : 상품명 : 무게감있는 심플 고양이 도자기 높은식기 소라색 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 건강/관리용품 > 눈/귀 관리 '</li></ul> | | 122 | <ul><li>'상품명 : 코스튬 양털 모자 토끼모자 PET 강아지 요즘인기 쇼핑추천 고양이 매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>헤어핀/주얼리 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 안경/모자 옵션명 : 펫모자토끼 '</li><li>'옵션명 : 02 Rose Red_01 S 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 헤어핀/주얼리 상품명 : 강아지 및 애완동물 의류, 아기 옷걸이, 소형 대형 개 액세서리, 고양이 제품, 5 매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>헤어핀/주얼리 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>헤어핀/주얼리 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 헤어핀/주얼리 상품명 : 애견 빗살 똑딱핀 도트 리본 동물똑딱핀 강아지머리삔 옵션명 : '</li></ul> | | 4 | <ul><li>'상품명 : 통뼈 소창말이 1p 대형견간식 강아지수제간식 매핑_카테고리2 : Naverstore > 펫탭 > 브랜드직영관 > 강아지 간식 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 간식>수제간식 옵션명 : '</li><li>'매핑_카테고리2 : MP > naver_plus_traverse > Naverstore > 반려동물용품 > 강아지 간식 > 수제간식 옵션명 : 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 간식>수제간식 상품명 : 포포네 에그미트볼 5종 (30g x 5pcs SET) '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 간식>수제간식 옵션명 : 매핑_카테고리2 : Naverstore > 펫탭 > 브랜드직영관 > 강아지 간식 상품명 : 옛날통덕(오리장각) 3p '</li></ul> | | 23 | <ul><li>'옵션명 : ANF 6FREE 오리&연어 5.6kg 상품명 : ANF 6FREE 식스프리 사료 애견 사료 5.6kg '</li><li>'상품명 : 이즈칸 강아지 사료 그레인프리 주니어 7kg (스몰) 옵션명 : 매핑_카테고리2 : Naverstore > irion-mall브랜드스토어 > 🧪기능별사료 > ★강아지★ 매핑_카테고리1 : (#M)홈>🐶댕댕이>★강아지사료★ '</li><li>'상품명 : 뉴스카이 성견 15kg 강아지 사료 건식 사료 반려동물 옵션명 : 매핑_카테고리1 : (#M)홈>생활/건강>반려동물>강아지 사료>건식사료 매핑_카테고리2 : Naverstore > 반려동물용품 > 강아지용품 > 사료 > 건식 '</li></ul> | | 104 | <ul><li>'옵션명 : coffee_S 상품명 : 푸들옷 중형견옷 개 옷 꽃 애완 동물 점퍼 스웨터 편안한 고양이 강아지커플룩 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>니트/스웨터 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 패션용품 > 니트/스웨터 상품명 : 엔돌펫 데일리 강아지 골지 스판 터틀넥 목폴라 머스타드 옵션명 : 머스타드_S '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>니트/스웨터 상품명 : 강아지커플룩 강아지니트 귀여운 따뜻한 가을 치와와 작은 옷 고양이 푸들옷 강아지스웨터 옵션명 : S 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 니트/스웨터 '</li></ul> | | 109 | <ul><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>스카프/목도리/케이프 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 패션용품 > 스카프/목도리/케이프 상품명 : 강아지 페이즐리 스카프 1P 반려견 머플러 반다나 옵션명 : 레드 '</li><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 코스튬 상품명 : 크리스마스 고양이 옷 강아지 트리 루돌프 케이프 망토 매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>스카프/목도리/케이프 옵션명 : 브라운_L '</li><li>'상품명 : 강아지 체크 목도리 / 머플러 케이프 매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>스카프/목도리/케이프 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 계절용품 옵션명 : 브라운_S '</li></ul> | | 38 | <ul><li>'상품명 : 고양이 영양효모 무염황태 건조간식 트릿 20g 매핑_카테고리2 : Naverstore > 펫탭 > 브랜드직영관 > 고양이 간식 매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 간식>육포/건조간식 옵션명 : '</li><li>'상품명 : 더캣츠 미니리얼 순오리살70g 옵션명 : '</li><li>'옵션명 : 상품명 : [몰리스] 오리&대구 슬라이스 '</li></ul> | | 79 | <ul><li>'옵션명 : 펫크린 손발똥꼬 크린 30매 상품명 : 펫크린 손발똥꼬 크린 30매 매핑_카테고리1 : (#M)생활/건강>반려동물>미용/목욕>물티슈/크리너 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 미용/목욕 > 물티슈/크리너 '</li><li>'옵션명 : 매핑_카테고리2 : 
T200 > Naverstore > 반려동물용품 > 고양이용품 > 미용/목욕 > 물티슈/크리너 매핑_카테고리1 : (#M)생활/건강>반려동물>미용/목욕>물티슈/크리너 상품명 : 눈꼽 눈세정제 x5 고양이케어 애견 39 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>미용/목욕>물티슈/크리너 옵션명 : 상품명 : 펫모닝 펫둥이 올바디 펫티슈 30매 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 미용/목욕 > 물티슈/크리너 '</li></ul> | | 82 | <ul><li>'옵션명 : 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 미용/목욕 > 브러시/빗 매핑_카테고리1 : (#M)생활/건강>반려동물>미용/목욕>브러시/빗 상품명 : 강아지 빗 고양이 브러쉬 엉킨털 진돗개 대형견 돈모 털갈이 죽은털 마사지 펫콤 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>미용/목욕>브러시/빗 옵션명 : 상품명 : 강아지 안면빗 색상랜덤 2P 고양이 눈꼽 위생 브러쉬 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 미용/목욕 > 브러시/빗 '</li><li>'옵션명 : 상품명 : 애견 슬리커 브러쉬 분홍 1P 강아지 고양이 털관리 빗 '</li></ul> | | 1 | <ul><li>'상품명 : 펫프리카 국내산 동결건조 메가 치킨 트릿 대용량 320g 고양이 강아지 간식 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 간식>동결건조 간식 옵션명 : 매핑_카테고리2 : Naverstore > 펫탭 > 브랜드직영관 > 강아지 간식 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 간식>동결건조 간식 매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 강아지용품 > 간식 > 동결건조 옵션명 : 상품명 : [할로윈이벤트] 오플 젠틀크런치 2종세트 북어100g + 오리100g '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 간식>동결건조 간식 옵션명 : 01_활기팡팡 북어트릿 1팩_cp01_04_똑똑하댕 노른자트릿 80g 1팩_cr01_04_똑똑하댕 노른자트릿 80g 1팩_cr01 매핑_카테고리2 : MP > Naverstore > 펫탭 > 브랜드직영관 > 강아지 간식 상품명 : 듀먼 동결건조 노른자트릿 3팩 세트 '</li></ul> | | 59 | <ul><li>'상품명 : (코스트코상품) 슈퍼포우 동결건조 트릿 닭가슴살 100g x 3 매핑_카테고리2 : MP > traverse > Naverstore > 반려동물용품 > 고양이용품 > 사료 > 동결건조 옵션명 : 매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 사료>동결건조 사료 '</li><li>'상품명 : 귀여운 작은 노란색 치킨 펜던트 플러시 장난감 인형 그물 붉은 병아리 미니 가방 펜던트 키 체인 인형 옵션명 : white hat 매핑_카테고리1 : 생활/건강>반려동물>고양이 사료>동결건조 사료 매핑_카테고리2 : Naverstore > 반려동물용품 > 고양이용품 > 사료 > 동결건조 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 사료>동결건조 사료 옵션명 : 매핑_카테고리2 : MP > traverse > Naverstore > 반려동물용품 > 고양이용품 > 사료 > 동결건조 상품명 : 테비 짜라짜라(10gX50개) 새우와치킨맛 '</li></ul> | | 71 | <ul><li>'상품명 : [파미야] 셀프시공 강아지 롤매트 50x110x0.35cm 미끄럼방지 논슬립 슬개골탈구 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 리빙용품 > 매트 옵션명 : 50X110cm_9T 매핑_카테고리1 : (#M)생활/건강>반려동물>리빙용품>매트 '</li><li>'옵션명 : 그레이 매핑_카테고리1 : (#M)생활/건강>반려동물>리빙용품>매트 상품명 : 네이처펫 논슬립 실리콘 배변매트 표준형 60X50cm 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 배변용품 > 배변판 '</li><li>'상품명 : 딩동펫 애견 미끄럼방지매트 방수 대리석 폴더 1단 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 리빙용품 > 매트 매핑_카테고리1 : (#M)생활/건강>반려동물>리빙용품>매트 옵션명 : 그레이헤링본_대형_2400-1400 '</li></ul> | | 97 | <ul><li>'상품명 : 강아지 고양이 비즈목걸이 진주목걸이 인식표 만들기 이니셜 구슬 비즈 공예 재료 옵션명 : A04 유광캔디볼8mm-진핑크25g 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 이동/산책용품 매핑_카테고리1 : (#M)생활/건강>반려동물>야외용품>목걸이/인식표 '</li><li>'상품명 : 강아지야광네임택 인식표 이름표 명찰 산책 야광목걸이 매핑_카테고리1 : (#M)생활/건강>반려동물>야외용품>목걸이/인식표 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 이동/산책용품 > 목걸이/인식표 옵션명 : 작은사이즈_인디핑크 '</li><li>'옵션명 : 상품명 : 큐빅 강아지모양 펜던트 백금체인목걸이 (12개-1판) '</li></ul> | | 80 | <ul><li>'옵션명 : 매핑_카테고리1 : (#M)생활/건강>반려동물>미용/목욕>미용가위 상품명 : 강아지 프리미엄 장가위 1P 고양이 미용 일자가위 반려견 셀프 애견 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 미용/목욕 > 미용가위 '</li><li>'옵션명 : 상품명 : 모모 강아지 일자가위 반려동물 애견 셀프 미용가위 '</li><li>'옵션명 : 매핑_카테고리1 : (#M)생활/건강>반려동물>미용/목욕>미용가위 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 미용/목욕 > 미용가위 상품명 : 강아지미용가위 강아지 요술가위 1P 고양이 셀프미용 컷팅 일자가위 '</li></ul> | | 35 | <ul><li>'상품명 : 캐츠랑 저요저요 이빨과자 관절튼튼, 60g, 4개 매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 간식>비스킷/스낵 매핑_카테고리2 : MP > traverse > Naverstore > 반려동물용품 > 고양이용품 > 간식 > 비스킷/스낵 옵션명 : '</li><li>'옵션명 : [25.04.01] 연어맛 51g 매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 간식>비스킷/스낵 매핑_카테고리2 : MP > naver_plus_traverse_extension > Naverstore > 반려동물용품 > 고양이 간식 > 비스킷/스낵 상품명 : [임박할인] 퓨리나 덴탈라이프 연어맛 51g '</li><li>'매핑_카테고리2 : Naverstore > 펫탭 > 브랜드직영관 > 고양이 간식 옵션명 : 매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 간식>비스킷/스낵 상품명 : 프리미요 크런치50g 참치 고양이건식간식 '</li></ul> | | 
116 | <ul><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 코스튬 옵션명 : 루돌프 의상_S 매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>코스튬 상품명 : 강아지 루돌프 의상 크리스마스 봉제 엘크 코트 루돌프 반려동물 고양이 코스튬 코스프레 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>코스튬 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 코스튬 상품명 : 강아지 할로윈 코스튬 옷 애완동물 의상 핫도그 모양 닥스훈트 소시지 조절식 고양이 워머 옵션명 : Hamburger_XXS '</li><li>'상품명 : 애완동물 거미 의상 할로윈 동물 옷 고양이 애완 코스프레 드레스 드레싱 옵션명 : 01=AsShown2 '</li></ul> | | 121 | <ul><li>'옵션명 : 퍼플_M 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 패션용품 > 티셔츠/후드 매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>한복 상품명 : 스웨이드 골지 터틀넥 강아지옷 '</li><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 한복 상품명 : 한복 대형견 누빔 케이프 모자 넥카라 강아지 고양이 애견 목도리 소형견 중형견 매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>한복 옵션명 : 화이트_2XL '</li><li>'옵션명 : 딸기/XXL 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 패션용품 > 플리스 상품명 : 강아지 티셔츠 맨투맨 가을 과일자수 애견 옷 후리스 의류 매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>한복 '</li></ul> | | 88 | <ul><li>'상품명 : 그레이스톤 강아지 무지개다리 추모 사진 비석 400mm 옵션명 : 카톡으로 전달할게요_디자인변경 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>서비스>장례용품 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 파티/장례서비스 상품명 : 반려동물 애완동물 강아지 고양이 유골함 장례 화장 옵션명 : 포겟미디엄7.5x7.5cm '</li><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 파티/장례서비스 매핑_카테고리1 : (#M)생활/건강>반려동물>서비스>장례용품 상품명 : 추모함 세라믹 애완 동물 항아리 햄스터 화장 유골 용기 작은 개 장례 관 매장 유물 기념물 묘소 옵션명 : 01 8.5X8.5CM '</li></ul> | | 118 | <ul><li>'옵션명 : 주황색_S 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 계절용품 상품명 : 강아지 반려견 패딩 방한복 옷 조끼 개 얼굴 겨울 애완견 다운 재킷 따뜻한 두꺼운 애완 매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>패딩 '</li><li>'옵션명 : 03 636Red_01 S 매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>패딩 상품명 : 강아지패딩 방수 옷 소형 중형견용 반사 애완 동물 재킷 고양이 코트 프렌치 불독 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 재킷/점퍼 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>패딩 상품명 : 크리스마스의상 반려동물 겨울옷 강아지 고양이 니트 옵션명 : 블랙 체리 XS 등 길이 20cm 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 니트/스웨터 '</li></ul> | | 3 | <ul><li>'상품명 : [강아지케이크] 강아지 생일케이크 커스텀 입체강아지케이크 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 간식>빵/케이크 매핑_카테고리2 : Naverstore > 반려동물용품 > 강아지용품 > 간식 > 빵/케이크 옵션명 : 앉아있는 강아지_고구마단호박(저알러지) '</li><li>'옵션명 : 고깔모자+플랭카드 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 간식>빵/케이크 상품명 : 네이월 소고기스페셜 강아지 수제 생일 케이크 매핑_카테고리2 : Naverstore > 펫탭 > 브랜드직영관 > 강아지 간식 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 간식>빵/케이크 매핑_카테고리2 : Naverstore > 반려동물용품 > 강아지용품 > 간식 > 빵/케이크 상품명 : 강아지컵케이크 옵션명 : 저알러지(고구마당근브로콜리)_추가안함(-10000원) '</li></ul> | | 113 | <ul><li>'상품명 : 강아지겨울원피스 밍크 리본 화동 니트원피스 옵션명 : 아이보리_M '</li><li>'옵션명 : 정장_M 가슴둘레 42 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 원피스/드레스 매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>원피스/드레스 상품명 : 강아지 정장 드레스 개 신사 정장 결혼식 고양이 옷 '</li><li>'옵션명 : S 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 원피스/드레스 매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>원피스/드레스 상품명 : 2pcs 강아지 드레스 인어 고양이 공주 꽃 여름 개 생일 파티 홀리데이 빨강과 노랑 M '</li></ul> | | 26 | <ul><li>'상품명 : 3+1 하루올데이 강아지 덴탈츄 반려견 양치 치석 입냄새 제거 덴탈 껌 치태 연어, 100g, 4개 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 간식>개껌 매핑_카테고리2 : MP > window_arrival_guarantee > Naverstore > 도착보장 > 반려동물 > 강아지 간식 옵션명 : '</li><li>'상품명 : 포포네 주식 포포밀 (120gX3P SET, 360g) 옵션명 : 양 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 사료>화식/생식사료 매핑_카테고리2 : MP > traverse > Naverstore > 반려동물용품 > 강아지용품 > 사료 > 화식/생식 '</li><li>'옵션명 : 선택1-오리1.2kg_36.포켄스 디스펜서 m 낱개 5p 매핑_카테고리2 : Naverstore > 반려동물용품 > 강아지용품 > 사료 > 소프트 상품명 : 아스쿠 펠리쿠치나 오리 1.2kg 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 사료>소프트사료 '</li></ul> | | 110 | <ul><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>신발/양말 옵션명 : 상품선택_민트그린 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 신발/양말 상품명 : 신발 슈즈 고양이 애완용품 샤워 미용 방지 할큄 목욕 '</li><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 신발/양말 
매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>신발/양말 옵션명 : 민트그린 상품명 : 실리콘 고양이 발커버 신발 스크래치방지 고양이양말 '</li><li>'옵션명 : 7.5CM_5.블루체크 상품명 : 10+1 멍뭉미 강아지 신발 애견 양말 부츠 염화칼슘 일회용 붕대신발 핑크고양이 5CM '</li></ul> | | 102 | <ul><li>'상품명 : 바잇미 잇백 강아지 이동가방 2컬러 L사이즈 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 패션용품 > 가방 매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>가방 옵션명 : 네이비L_브라운 '</li><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 패션용품 > 가방 옵션명 : 상품선택_확장패널 21cm 상품명 : 21cm 버스 반려동물패션 애견동반식당 병원 산책 간식가방 실내용울타리 안전문 확장패널 매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>가방 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>가방 상품명 : 애견슬링백 애견백팩 메쉬 애완 동물 고양이 개 캐리어 배낭 위장 야외 애견캔넬 켄넬 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 가방 옵션명 : 9Mesh_S '</li></ul> | | 39 | <ul><li>'상품명 : 핫케익가루 450G백설 옵션명 : '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 간식>음료 매핑_카테고리2 : Naverstore > 반려동물용품 > 고양이용품 > 간식 > 음료 상품명 : 하이넛 아몬드슬라이스(대봉 1K)_식품 옵션명 : '</li><li>'옵션명 : 1EA 상품명 : 6개묶음 반려동물 식수물 미네랄 영양수분 워터 500ml '</li></ul> | | 5 | <ul><li>'매핑_카테고리2 : MP > traverse > Naverstore > 반려동물용품 > 강아지용품 > 간식 > 육포/건조 옵션명 : 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 간식>육포/건조간식 상품명 : 닭고기 350g 대용량 닭고기링 닭고기츄 바이츠링 '</li><li>'옵션명 : 상품명 : 네츄럴코어 네코치킨 3000 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 간식>육포/건조간식 옵션명 : 11450 헬로도기 꽉찬 육포 1kg_헬로도기 꽉찬 육포 오리젤리꽈배기 1kg 매핑_카테고리2 : MP > traverse > Naverstore > 반려동물용품 > 강아지용품 > 간식 > 육포/건조 상품명 : 테비 사사미 육포 대용량 치킨꽈배기 1kg '</li></ul> | | 100 | <ul><li>'상품명 : 태연강아지가방 애견 이동장 켄넬 고양이 나혼산 펫캐리어 옵션명 : 블루 '</li><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 이동/산책용품 상품명 : 강아지 대형견 소프트 켄넬(S/M/L/XL) 매핑_카테고리1 : (#M)생활/건강>반려동물>이동장/외출용품>이동장/이동가방 옵션명 : XL '</li><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 이동/산책용품 매핑_카테고리1 : (#M)생활/건강>반려동물>이동장/외출용품>이동장/이동가방 옵션명 : 아이보리/블루_풀패키지 _ M_쿠션 _ 베이지 M 상품명 : 리카리카 리카백 강아지이동가방 애견 캐리어 기내용 '</li></ul> | | 72 | <ul><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>리빙용품>안전문 상품명 : 베란다방묘문 케이지 견문 안전문 펫도어 캣도어 방묘창 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 리빙용품 > 안전문 옵션명 : B_참고 후크 유형은 매끄러운 벽에만 적합합니 '</li><li>'옵션명 : 6 cm 높이 60 매핑_카테고리1 : (#M)생활/건강>반려동물>리빙용품>안전문 상품명 : 안전바 펜스 안전울타리 애견 강아지 고양이 유아 낙상 방지 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 리빙용품 > 안전문 '</li><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 리빙용품 옵션명 : 매핑_카테고리1 : (#M)생활/건강>반려동물>리빙용품>안전문 상품명 : 고퀄 고양이 강아지 출입문 안전문 펫도어 견문 애견 용품 반려동물 '</li></ul> | | 84 | <ul><li>'매핑_카테고리1 : 홈>샴푸&린스;(#M)홈>샘플 신청 매핑_카테고리2 : Naverstore > forcans스마트스토어 > 전체 상품 상품명 : 포켄스 베이비파우더 샴푸 14ml 샘플 강아지 샴푸 옵션명 : '</li><li>'매핑_카테고리1 : (#M)홈>미용&목욕>샴푸 옵션명 : 매핑_카테고리2 : Naverstore > haruwellshop브랜드스토어 > 전체상품 상품명 : 아일오브독스 스탠드업샴푸 강아지 대용량 전문가 살롱 애견샵 3.8L '</li><li>'옵션명 : 상품명 : [바이오강스] 바이오강스 2in1 샴푸 250ml 한병에 목욕이 간편한 샴푸+린스 '</li></ul> | | 9 | <ul><li>'옵션명 : 상품명 : 풀무원 아미오 헬씨믹스 트릿 장 100g '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 간식>트릿/스틱 상품명 : 슈슈간식타임 닭고기 70g 옵션명 : 매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 강아지용품 > 간식 > 트릿/스틱 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 간식>트릿/스틱 옵션명 : 매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 강아지용품 > 간식 > 트릿/스틱 상품명 : 마테차 거름망 티필터 차거름망 '</li></ul> | | 29 | <ul><li>'상품명 : 강아지 눈하트 소리나는 라텍스 공 장난감 토이 X3개 반려견용장난감 애견장난감 강아지 펫장난감 옵션명 : '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 장난감/훈련>공/원반 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 장난감/훈련 옵션명 : 상품명 : 콩 테니스공 장난감 소 '</li><li>'상품명 : 애견 원반 던지기 핑크 놀이용 공 강아지 반려견 장난감 부메랑 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 장난감/훈련>공/원반 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 장난감/훈련 > 공/원반 옵션명 : '</li></ul> | | 76 | <ul><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>리빙용품>하우스 옵션명 : 상품명 : 도넛 고양이 터널 숨숨집(50cm) 캣 하우스 장난감 도너츠 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 장난감 > 터널/주머니 '</li><li>'옵션명 : 07.블랙 3단-140x55x175cm_칸막이 트레이 바퀴 
포함 상품명 : 고양이 동물병원 사육 대형 케이지 번식 애견분양장 '</li><li>'상품명 : 라탄 캣하우스 고양이집 강아지집 숨숨집 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 리빙용품 > 하우스 매핑_카테고리1 : (#M)생활/건강>반려동물>리빙용품>하우스 옵션명 : 기본-내추럴우드 '</li></ul> | | 56 | <ul><li>'상품명 : 고양이 화장실 자동 청소 소독 살균 탈취 스마트 버전 옵션명 : 화이트 스마트 버전 '</li><li>'상품명 : 밀폐형 고양이 자동화장실 전자동 스마트화장실 모래 튐 방지 옵션명 : 자동 냥이화장실 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 배변용품 > 자동화장실 매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 배변용품>후드형화장실 '</li><li>'옵션명 : 자동 고양이 화장실 APP 버전(모래 조절 매 상품명 : 고양이 자동 화장실 청소 기능 화장실 '</li></ul> | | 62 | <ul><li>'옵션명 : 참치 100개입+5p 상품명 : 동원 뉴트리스틱 100개입 고양이 대용량 츄르 짜먹는 저염 간식 동원 참치 닭가슴살 '</li><li>'옵션명 : 1kg+더마샘플3개 매핑_카테고리2 : MP > Naverstore > royalcanin스마트스토어 > 전체상품 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 사료>처방식/기능식사료 상품명 : 로얄캐닌 미니 다이제스티브케어 3kg 건강한장관리 '</li><li>'매핑_카테고리2 : Naverstore > 반려동물용품 > 고양이용품 > 사료 > 습식 옵션명 : 꿩 매핑_카테고리1 : (#M)홈>습식 사료 상품명 : 동동이네 고양이 캔 미아모아 캣 파스테이트 가금류 간 캔 85g 9종 모음 영양 주식캔 '</li></ul> | | 17 | <ul><li>'매핑_카테고리1 : (#M)홈>강아지>배변용품 상품명 : LG유니참 강아지기저귀 매너웨어 여아용 M (34P) (매너밸트) 매핑_카테고리2 : Naverstore > lgcarepet브랜드스토어 > 전체상품 옵션명 : 34매 M '</li><li>'옵션명 : 8.매너웨어 남 SSS 52P 매핑_카테고리2 : Naverstore > lgcarepet브랜드스토어 > 전체상품 상품명 : 하츠 유니참 강아지 기저귀&배변용품&산책용품 모음전 매핑_카테고리1 : (#M)홈>강아지>배변용품 '</li><li>'옵션명 : 6.매너웨어 여 L 32P 매핑_카테고리2 : Naverstore > lgcarepet브랜드스토어 > 전체상품 상품명 : LG유니참 강아지 기저귀&배변용품&산책용품 모음전 매핑_카테고리1 : 홈>전체상품;(#M)홈>강아지>배변용품 '</li></ul> | | 66 | <ul><li>'상품명 : 바스락 캣 터널 1p 고양이놀이터 스트레스해소 숨숨집 매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 장난감>터널/주머니 옵션명 : 오렌지 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 장난감 > 터널/주머니 '</li><li>'옵션명 : 상품명 : 큐브플레이터널 '</li><li>'옵션명 : 중형 상품명 : 소심한호랑이 버터링터널 고양이터널 고양이숨숨집 매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 장난감>터널/주머니 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 장난감 '</li></ul> | | 90 | <ul><li>'옵션명 : White 매핑_카테고리1 : (#M)생활/건강>반려동물>식기/급수기>급수기/물병 상품명 : 자동급수기 강아지자동급수기 L 애완 동물 물 분수 목 모양 고양이물그릇 강아지물통 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 식기/급수기 > 급수기/물병 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>식기/급수기>급수기/물병 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 식기/급수기 > 급수기/물병 상품명 : 고양이정수기필터 반려동물급수기 듀얼 포트 개 자동 워터 디스펜서 애견정수기 고양이급수기 옵션명 : WHITE '</li><li>'옵션명 : 08 고온 저항 블루 접이식 주전자 550m 매핑_카테고리1 : (#M)생활/건강>반려동물>식기/급수기>급수기/물병 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 이동/산책용품 > 물병 상품명 : 강아지산책물통 접이식 물병 반자동 개물통 휴대용 급수기 물컵 17활성탄 '</li></ul> | | 43 | <ul><li>'매핑_카테고리1 : (#M)홈>고양이 간식>템테이션 옵션명 : 상품명 : 템테이션 고소한 참치맛 4개 매핑_카테고리2 : Naverstore > marskorea브랜드스토어 > 전체상품 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 간식>통살/소시지 상품명 : 테비 말랑 부드럽닭 20g x 100개 1박스 옵션명 : 매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 고양이용품 > 간식 > 통살/소시지 '</li><li>'매핑_카테고리2 : Naverstore > 반려동물용품 > 고양이용품 > 간식 > 비스킷/스낵 옵션명 : 상품명 : [유통기한2022-08-31] 템테이션 고소한 참치맛 고양이 간식 12묶음 매핑_카테고리1 : (#M)홈>🐱냥냥이🐱>🧁냥냥이 간식 '</li></ul> | | 33 | <ul><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 파티/장례서비스 > 장례용품 상품명 : 애완 동물을 위한 동물 항아리, 개인화 된 도자기, 재를 작은 개 인간 화장을 장례식 항아리 148705 ㅏ 750ml 옵션명 : 148705 비_750ml 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 장난감/훈련>훈련용품 '</li><li>'상품명 : 천국 대형견 가죽철망입마개(블랙)3호 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 장난감/훈련 > 훈련용품 옵션명 : 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 장난감/훈련>훈련용품 '</li><li>'상품명 : co/(색상랜덤)강아지 트레이닝용 클리커 편리한 손목걸이 훈련클리커 좋은상품 옵션명 : '</li></ul> | | 14 | <ul><li>'매핑_카테고리1 : 홈>핏펫✨>영양제 베터👍;홈>브랜드>Vetter;홈>브랜드>it 잇츄l잇츄러스l츄잇;홈>브랜드>Vetter 베터;(#M)홈>브랜드>베터 상품명 : [1+1] 베터 밥에 뿌려주는 반려동물 영양제 파우더 90g 7종 눈건강 장건강 피부 관절 노견 퍼피 매핑_카테고리2 : Naverstore > fitpet스마트스토어 > 전체상품 옵션명 : 베터 비오틴 90g_베터 시니어 멀티케어 90g '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 건강/관리용품>영양제 매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 강아지용품 > 건강/관리용품 > 영양제 상품명 : [당일출고] NHV 레스프 에이드 100ml (호흡장애, 
기침과 기관지) 옵션명 : '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 건강/관리용품>영양제 매핑_카테고리2 : MP > naver_plus_traverse > Naverstore > 반려동물용품 > 퍼피 > 영양제 옵션명 : 상품명 : 닥터 장 건강트릿 유산균 한달 영양제 영양트릿 강아지영양제 240g '</li></ul> | | 64 | <ul><li>'상품명 : 루어캣 빅치즈 스크래쳐 SC-341 옵션명 : '</li><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 장난감 상품명 : 딸랑방울 우드트랙 고양이 장난감 1P 스크래쳐 사냥 LWS 매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 장난감>스크래쳐 옵션명 : '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 장난감>스크래쳐 옵션명 : 혼합색상 × 3개 상품명 : 가또 폴카닷 라운지 고양이 스크래쳐 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 장난감 > 스크래쳐 '</li></ul> | | 98 | <ul><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 이동/산책용품 > 목줄 옵션명 : 블랙 상품명 : 희망 애완견 가죽 로프목걸이 - M 대형견 리드줄 매핑_카테고리1 : (#M)생활/건강>반려동물>야외용품>목줄 '</li><li>'상품명 : 중소형견 가죽목줄 (약1.5x40cm) 개목줄 반려견목줄 반려견목줄 펫목줄 펫전용목줄 강아지 애견목줄 옵션명 : 블랙 매핑_카테고리1 : (#M)생활/건강>반려동물>야외용품>목줄 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 이동/산책용품 > 목줄 '</li><li>'옵션명 : 상품명 : 애견 러닝 벨트 '</li></ul> | | 58 | <ul><li>'매핑_카테고리2 : Naverstore > 반려동물용품 > 고양이용품 > 사료 > 건식 옵션명 : [00] 소분팩세트_[오리젠] 오리지날 캣 5.4kg 상품명 : 오리젠 피트앤트림 캣 5.4kg (유통기한24년11월) 매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 사료>건식사료 '</li><li>'매핑_카테고리1 : (#M)홈>🐟 고양이 사료 매핑_카테고리2 : Naverstore > 반려동물용품 > 고양이용품 > 사료 > 건식 상품명 : 나인케어 캣 고양이사료 체형 1.2kg 체형관리 다이어트사료 옵션명 : 체형관리_2.4kg_11. 더캣츠 미니푸딩 25g 3개 '</li><li>'상품명 : 피부모질관리 전연령묘용 고양이사료 5kg 반려식품 옵션명 : 1EA 매핑_카테고리1 : (#M)홈>생활/건강>반려동물>고양이 사료>건식사료 매핑_카테고리2 : Naverstore > 반려동물용품 > 고양이용품 > 사료 > 건식 '</li></ul> | | 16 | <ul><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 건강/관리용품>칫솔 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 건강/관리용품 > 구강관리 상품명 : 아인솝세모칫솔 미세모칫솔 강아지 고양이 양치질 치석제거 입냄새제거 3가지컬러 옵션명 : 블루 '</li><li>'옵션명 : 플라고 손가락칫솔 패드 (60매/2개월분) 상품명 : [핏펫] 플라고 손가락칫솔 패드 60매입 강아지 반려동물 구강관리 덴탈 칫솔 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 건강/관리용품>칫솔 매핑_카테고리2 : Naverstore > fitpet스마트스토어 > 전체상품 '</li><li>'옵션명 : 상품명 : [핏펫] 플라고 칫솔 (일반) 강아지 구강용품 '</li></ul> | | 75 | <ul><li>'상품명 : 극세사 애견 고양이 사각방석 L 쿠션 논슬립 트리나무 옵션명 : '</li><li>'옵션명 : 플라워방석 / 핑크+화이트_S 매핑_카테고리1 : (#M)생활/건강>반려동물>리빙용품>쿠션/방석 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 리빙용품 > 쿠션/방석 상품명 : 누비홈 강아지방석 애견 고양이 쿠션 겨울 유모차 도넛 플라워방석 민트+핑크 S '</li><li>'상품명 : 포시럽 코끼리 프린팅 강아지 담요 극세사 고양이 이불 쿠션 베개 블랭킷 S 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 리빙용품 > 쿠션/방석 매핑_카테고리1 : (#M)생활/건강>반려동물>리빙용품>쿠션/방석 옵션명 : 코끼리 프린팅 극세사 담요_블루 M '</li></ul> | | 112 | <ul><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>올인원 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 올인원 옵션명 : 퍼플_L 상품명 : 강아지 올인원 후리스 겨울 기모 강아지옷 올인원 중형견 BY 스퀘어 '</li><li>'옵션명 : 노랑 S 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 패션용품 > 올인원 상품명 : 강아지올인원 강아지 엠보 멜빵 올인원 1P 반려견 봄가을 바지 매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>올인원 '</li><li>'상품명 : 리본 쭈글이 끈나시 스커트 강아지옷 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 패션용품 > 올인원 매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>올인원 옵션명 : 블랙_M '</li></ul> | | 0 | <ul><li>'상품명 : 닥터바이 브레스 강아지 기관지 영양제 협착증 기침 호흡기 켁켁거림 보조제 옵션명 : '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 간식>개껌 상품명 : 한입뚝딱 덴탈 오메가3 210g 강아지간식 (유통기한 24.02.23) 옵션명 : 덴탈오메가3 210g x3개_24.02.23 매핑_카테고리2 : Naverstore > 펫탭 > 브랜드직영관 > 강아지 간식 '</li><li>'매핑_카테고리2 : Naverstore > 반려동물용품 > 강아지용품 > 간식 > 육포/건조 매핑_카테고리1 : (#M)홈>전체상품 상품명 : 제트 대용량 강아지 간식 1kg 우유오리껌 오랫동안 먹는 대포장 애견 간식 옵션명 : 💥(파격할인) 포켄스 덴탈껌&영양제_2.덴티페어리 디스펜서 S 584g '</li></ul> | | 54 | <ul><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 간식>동결건조 간식 옵션명 : 유니참 데오토일렛 탈취제 내추럴그린향 450m 상품명 : 유니참 데오토일렛 탈취제 내추럴그린향 450mlx2개/포타쥬1팩 매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 고양이용품 > 간식 > 동결건조 '</li><li>'상품명 : LG유니참 뿌려쓰는 탈취제 내추럴 그린향 x 2개 매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 고양이용품 > 간식 > 동결건조 매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 간식>동결건조 간식 옵션명 : '</li><li>'상품명 : 퍼피움 
탈취제 플로랄향 750ml x 2 옵션명 : 매핑_카테고리2 : Naverstore > 반려동물용품 > 고양이용품 > 사료 > 건식 매핑_카테고리1 : (#M)홈>생활/건강>반려동물>고양이 사료>건식사료 '</li></ul> | | 47 | <ul><li>'상품명 : 캣스푸 10g x 5개입 / 7세이상 노령묘전용 간식 영양제599668 35 옵션명 : 치킨 10g x5개599668 35 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 건강/관리용품>영양제 상품명 : 자이목스 토피컬 스프레이 강아지 고양이 연고 0% 하이드로코티손 56ml 옵션명 : 매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 고양이용품 > 건강/관리용품 > 영양제 '</li><li>'상품명 : 쓰담쓰담 고양이 마싯는고양 츄르 기능성 영양제 7개입 매핑_카테고리2 : Naverstore > 반려동물용품 > 고양이용품 > 간식 > 동결건조 매핑_카테고리1 : (#M)홈>생활/건강>반려동물>고양이 간식>동결건조 간식 옵션명 : 헤어볼 장건강 x 1개 '</li></ul> | | 115 | <ul><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>커플룩 옵션명 : 그린(원피스+멜빵)_M 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 커플룩 상품명 : 댕댕이커플룩 강아지크리스마스옷 루돌프옷 강아지코스튬 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>커플룩 상품명 : 강아지커플룩새로운 애완 동물 개 옷 겨울 프랑스 불독 부드러운 양털 작은 고양이 따뜻한 옵션명 : 02 Ducks_03 L for 5.5-8.5 kg 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 커플룩 '</li><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 패션용품 > 커플룩 매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>커플룩 상품명 : 페키페키 강아지커플룩 굿나잇베어 견주용 수면바지 남녀공용 90~110 옵션명 : 아이보리_M '</li></ul> | | 92 | <ul><li>'상품명 : 파충류 도자기 밥그릇 물그릇 식기 원형 소형 중형 매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 강아지용품 > 간식 > 개껌 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 간식>개껌 옵션명 : 원형 화이트 중형 '</li><li>'옵션명 : 매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 강아지용품 > 간식 > 개껌 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 간식>개껌 상품명 : 슈퍼 강아지 식기 더블 스피어 보울 블랙 M Pet 강아 '</li><li>'상품명 : 파충류 도자기 밥그릇 물그릇 식기 원형 소형 중형 파총류 소동물 먹이 접시 도마뱀 옵션명 : 원형 화이트 중형 '</li></ul> | | 117 | <ul><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>티셔츠/후드 상품명 : 반려동물 방울 - 20개-1판 소 애견방울 5색x4개씩 애견 펫 악세서리 목걸이 고양이 강아지 캣 방울 옵션명 : 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 패션용품 > 기타액세서리 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>티셔츠/후드 옵션명 : 상품명 : 애견 바니집게 고양이집게핀 펫악세서리 15개-1판 강아지 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 헤어핀/주얼리 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>티셔츠/후드 상품명 : 중성화복 중성화옷 고양이 수술복 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 계절용품 옵션명 : M_블루 '</li></ul> | | 74 | <ul><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 리빙용품 > 침대/해먹 상품명 : 고양이 강아지 원목 2층 침대 반려동물 용품 옵션명 : 노란색 스윙 침대 매핑_카테고리1 : (#M)생활/건강>반려동물>리빙용품>침대/해먹 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>리빙용품>침대/해먹 상품명 : 카무라라 고양이 창문 해먹 윈도우 창틀 침대 쉼터 유리창 창밖 캣해먹 선반 옵션명 : 중형_편안한 그레이 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 리빙용품 > 침대/해먹 '</li><li>'상품명 : 고양이해먹애완동물 고양이 해먹 그네 옵션명 : 01 L '</li></ul> | | 36 | <ul><li>'옵션명 : 상품명 : 더캣츠 미니 베로베로 100p 5종 20p 고양이 먹이 애묘캔간식 캣간식 애견간식 '</li><li>'옵션명 : 상품명 : 해태 오예스 대용량 쿠키 과자앤크림 360g 12봉입 x 10개1박스 싸다몰2835494 '</li><li>'옵션명 : 오리온 초코파이 1170g 30P 6곽 상품명 : 오리온 초코파이 1170g 30P 6곽 2186013 '</li></ul> | | 99 | <ul><li>'상품명 : 유모차 원터치 접이식 별도 가방 경량 고양이와 개 산책 카트 옵션명 : P 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 이동/산책용품 > 유모차 매핑_카테고리1 : (#M)생활/건강>반려동물>이동장/외출용품>유모차 '</li><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 이동/산책용품 > 유모차 상품명 : [N포인트 39000점] 피콜로카네 강아지 유모차 카리노2 블랙 매핑_카테고리1 : (#M)생활/건강>반려동물>이동장/외출용품>유모차 옵션명 : 피콜로카네 카리노2 멜렌지그레이 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>이동장/외출용품>유모차 옵션명 : 실버화이트 상품명 : 피콜로카네 강아지유모차 탄토2 모스그린 (본사발송) 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 이동/산책용품 > 유모차 '</li></ul> | | 67 | <ul><li>'매핑_카테고리2 : Naverstore > lgcarepet브랜드스토어 > 전체상품 매핑_카테고리1 : (#M)홈>강아지>장난감 옵션명 : 12.고양이 장난감 슈퍼헌터팩 1+1 상품명 : 하츠 강아지 고양이 장난감 '</li><li>'옵션명 : 5.수퍼헌터팩 1+1 상품명 : 고양이 장난감 하츠 6종 균일가[1+1] 6종 중 택1 (스크래쳐 낚시대/ 스크래쳐 쥐돌이/ 숨숨집 ) '</li><li>'상품명 : 고양이 놀이공 토이 김밥인형 캣닢장난감 반려용품 옵션명 : 1EA 매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 간식>캣닢/캣그라스 매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 고양이용품 > 간식 > 캣닢/캣그라스 '</li></ul> | | 53 | <ul><li>'매핑_카테고리1 
: (#M)생활/건강>반려동물>고양이 배변용품>자동화장실 옵션명 : 플러스 + 샌드박스 상품명 : 고양이자동화장실 탈취 대형 자동청소 배변통 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 배변용품 > 자동화장실 '</li><li>'상품명 : 고양이 자동 화장실 캣링크 소독 탈취 자동배변기 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 배변용품 > 자동화장실 옵션명 : 고급 S1 버전 AI 지능형 쓰레기통 매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 배변용품>자동화장실 '</li><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 배변용품 > 자동화장실 매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 배변용품>자동화장실 상품명 : 다묘가정용 화장실 펫킷 배변 처리기 자동 고양이 스마트 옵션명 : 페멕스 고양이 화장실x탈취기 '</li></ul> | | 12 | <ul><li>'상품명 : 와우 구강 청결티슈 100매 관리용품 반려동물 건강 생활 강아지 옵션명 : 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 건강/관리용품>구강티슈 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 건강/관리용품 > 구강관리 '</li><li>'옵션명 : 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 건강/관리용품>구강티슈 매핑_카테고리2 : MP > traverse > Naverstore > 반려동물용품 > 강아지용품 > 간식 > 캔/파우치 상품명 : 워터리스 강아지 핑거 양치티슈 20매 '</li><li>'상품명 : 와우 이크린 구강 청결티슈 100매 옵션명 : '</li></ul> | | 22 | <ul><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 배변용품>탈취제/소독제 옵션명 : 고강탈4L+500ml용기+100ml+깔대기 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 배변용품 > 탈취제/소독제 상품명 : 강아지 탈취제 고강탈 4리터 애견 고양이 오줌 냄새제거 소독제 '</li><li>'상품명 : 포르티 은나노탈취제 라벤더향 1000mlx2 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 배변용품>탈취제/소독제 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 배변용품 > 탈취제/소독제 옵션명 : '</li><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 배변용품 > 탈취제/소독제 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 배변용품>탈취제/소독제 옵션명 : 상품명 : 라벤더향 1000ml 은나노탈취제 포르티 x2 '</li></ul> | | 13 | <ul><li>'옵션명 : 강아지 랜덤 사은품 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 건강/관리용품 > 눈/귀 관리 상품명 : Breezy tail (브리지테일) 페토세라 큐아이 300g 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 건강/관리용품>눈/귀 관리용품 '</li><li>'매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 건강/관리용품 > 눈/귀 관리 옵션명 : 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 건강/관리용품>눈/귀 관리용품 상품명 : 오티클랜스 오티클렌스 귀세정제 120ml 강아지 고양이 '</li><li>'옵션명 : 상품명 : [단독세트] 페토세라 강아지 고양이 눈물세정 대용량 SET '</li></ul> | | 96 | <ul><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>야외용품>리드줄 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 이동/산책용품 > 리드줄 옵션명 : 올리브 상품명 : 아띠지기 강아지 핸즈프리 리드줄 멀티 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>야외용품>리드줄 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 이동/산책용품 > 목줄 상품명 : 버클 벨트 강아지 반려동물 애견 목줄 대형견 옵션명 : 색상랜덤 '</li><li>'옵션명 : 화이트 매핑_카테고리1 : (#M)생활/건강>반려동물>야외용품>리드줄 상품명 : [CL01] 오엘라 강아지 LED 충전식 산책용 잠금 리드줄 (최대 2.5m) 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 강아지용품 > 이동/산책용품 > 리드줄 '</li></ul> | | 50 | <ul><li>'상품명 : LG유니참 감자&사막화 Zero 고양이패드 4매,10매, 다묘용 8매 중 택 옵션명 : 고양이패드 다묘용 8매 1팩 매핑_카테고리2 : Naverstore > lgcarepet브랜드스토어 > 데오토일렛 매핑_카테고리1 : 홈>강아지>배변용품;홈>데오토일렛>패드;홈>전체상품;(#M)홈>고양이>배변용품 '</li><li>'옵션명 : 0001 기본상품 상품명 : LG유니참 데오토일렛 소취 항균 패드 10P 향 x 4팩 '</li><li>'옵션명 : 상품명 : LG유니참 데오토일렛 자묘용 고양이 화장실 '</li></ul> | | 42 | <ul><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 간식>통살/소시지 옵션명 : 상품명 : [웁스] 닭가슴살 22g x 30개입 - 펫비투비 매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 고양이용품 > 간식 > 통살/소시지 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 간식>통살/소시지 매핑_카테고리2 : Naverstore > 펫탭 > 브랜드직영관 > 고양이 간식 상품명 : 런치보니또 참치&가쯔오부시 20g 옵션명 : '</li><li>'상품명 : 1% 진짜참치 아이케어 22g 옵션명 : '</li></ul> | | 11 | <ul><li>'옵션명 : 상품명 : 자이목스 오라틴 드링킹워터 치약 115ml '</li><li>'옵션명 : 프라그오프 덴탈케어 파우더 420g 상품명 : 스웨덴 프로덴 플라그 오프 420g 대용량 프라그오프 덴탈케어 파우더 강아지 고양이 '</li><li>'상품명 : 88덴탈가드 뿌려먹는 잇몸 염증 치석제거 반려동물 구강영양제 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 건강/관리용품>구강청결제 매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 강아지용품 > 건강/관리용품 > 영양제 옵션명 : '</li></ul> | | 103 | <ul><li>'옵션명 : 상품명 : 애견식기 헬로도기 젖병세트 75cc 매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>기타액세서리 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 기타액세서리 '</li><li>'옵션명 : 그린(1295-2) 상품명 : 강아지 급체 과식 비만방지 밥그릇 다이어트 식단조절 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 기타액세서리 매핑_카테고리1 : 
(#M)생활/건강>반려동물>패션용품>기타액세서리 '</li><li>'옵션명 : 코코 삐삐 체크 패딩 올인원/핑크_XL 상품명 : 코코스튜디오 고양이 강아지옷 삐삐 체크 패딩 올인원 매핑_카테고리1 : (#M)생활/건강>반려동물>패션용품>기타액세서리 매핑_카테고리2 : T200 > Naverstore > 반려동물용품 > 고양이용품 > 패션용품 > 올인원 '</li></ul> | | 40 | <ul><li>'옵션명 : 상품명 : 이나바 챠오츄르 참치 14g 4P SC-71 고양이간식 매핑_카테고리1 : 홈>🎁고양이간식🎁;(#M)홈>🎁고양이간식🎁>챠오츄르>챠오츄르 매핑_카테고리2 : Naverstore > 반려동물용품 > 고양이용품 > 간식 > 캔/파우치 '</li><li>'옵션명 : 쮸루쮸루_. 쮸루쮸루 (헤어볼) 30g 매핑_카테고리1 : 홈>전체상품;(#M)홈>고양이간식 상품명 : 테비 쮸루쮸루 고양이간식 30g 매핑_카테고리2 : Naverstore > 반려동물용품 > 고양이용품 > 간식 > 캔/파우치 '</li><li>'매핑_카테고리2 : Naverstore > 펫탭 > 브랜드직영관 > 고양이 간식 옵션명 : 캣스토랑 크레미 80g 1개 매핑_카테고리1 : (#M)생활/건강>반려동물>고양이 간식>캔/파우치 상품명 : 더캣 외식시리즈 캣스토랑 감바스 80g '</li></ul> | | 32 | <ul><li>'상품명 : 칼리 도넛 카우 강아지 라텍스 삑삑이 장난감 옵션명 : 플라밍고 TPR 테니스볼 강아지 치석제거 장난 '</li><li>'매핑_카테고리1 : (#M)홈>강아지>장난감 매핑_카테고리2 : Naverstore > lgcarepet브랜드스토어 > 전체상품 상품명 : 하츠 강아지 고양이 장난감 옵션명 : 4.강아지 장난감 듀라플레이 뼈다귀 1+1 '</li><li>'상품명 : [댄온라인] 디즈니 캐릭터 인형 로프장난감 옵션명 : 티거 '</li></ul> | | 20 | <ul><li>'상품명 : [웁스] [리뉴얼]그물망 배변판 소형2568945 5 옵션명 : 핑크색2568945 5 '</li><li>'상품명 : 망사배변판 참좋은 대형 매핑_카테고리1 : (#M)홈>생활/건강>반려동물>강아지 간식>육포/건조간식 매핑_카테고리2 : Naverstore > 반려동물용품 > 강아지용품 > 간식 > 육포/건조 옵션명 : 아이보리 '</li><li>'매핑_카테고리2 : Naverstore > 반려동물용품 > 강아지용품 > 간식 > 육포/건조 옵션명 : 그레이 매핑_카테고리1 : (#M)홈>생활/건강>반려동물>강아지 간식>육포/건조간식 상품명 : (참좋은) 망사배변판 (대형) '</li></ul> | | 68 | <ul><li>'상품명 : [산타마리아노벨라]반려동물용 데오도란트 - 무스치오 (데오도란트 알 프로퓨마 디 무스치오) 옵션명 : '</li><li>'매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 강아지용품 > 사료 > 건식 옵션명 : 애니멀스힐로 배송되는 상품입니다._구매자 정보가 반영되어 후원명단에 작성됩니다. 상품명 : [유기견 사료후원] 담양 애니멀스힐 (구 동산쉼터) 후원전용상품 아지피아 20kg 매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 사료>건식사료 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>강아지 간식>동결건조 간식 옵션명 : 매핑_카테고리2 : MP > Naverstore > 반려동물용품 > 강아지용품 > 간식 > 동결건조 상품명 : 트러스티푸드 가니쉬 동결건조 오크라 25g 강아지 간식 토퍼 '</li></ul> | | 123 | <ul><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>햄스터용품 매핑_카테고리2 : OPS > Naverstore > 반려동물용품 > 햄스터용품 상품명 : 타핏 뭉게뭉게 페이퍼 베딩 옵션명 : 믹스그린(900g) '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>햄스터용품 옵션명 : 오로라펄(450g) 매핑_카테고리2 : OPS > Naverstore > 반려동물용품 > 햄스터용품 상품명 : 타핏 뭉게뭉게 페이퍼 베딩 '</li><li>'매핑_카테고리1 : (#M)생활/건강>반려동물>햄스터용품 상품명 : 타핏 뭉게뭉게 페이퍼 베딩 옵션명 : 화이트(450g) 매핑_카테고리2 : OPS > Naverstore > 반려동물용품 > 햄스터용품 '</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.8774 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("mini1013/master_item_top_ps_flat") # Run inference preds = model("옵션명 : 01=1_XS 상품명 : 원피스 공주 고양이 멜빵 민소매 활 애견 조끼 강아지 드레스 인스 스트리머 투투 스커트 ") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 7 | 26.7504 | 57 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 250 | | 1 | 250 | | 2 | 162 | | 3 | 250 | | 4 | 250 | | 5 | 250 | | 6 | 250 | | 7 | 250 | | 8 | 66 | | 9 | 250 | | 10 | 250 | | 11 | 52 | | 12 | 13 | | 13 | 250 | | 14 | 250 | | 15 | 163 | | 16 | 81 | | 17 | 93 | | 18 | 203 | | 19 | 135 | | 20 | 19 | | 21 | 217 | | 22 | 234 | | 23 | 250 | | 24 | 250 | | 25 | 209 | | 26 | 250 | | 27 | 250 | | 28 | 250 | | 29 | 200 | | 30 | 227 | | 31 | 250 | | 32 | 49 | | 33 | 250 | | 34 | 250 | | 35 | 250 | | 36 | 250 | | 37 | 250 | | 38 | 91 | | 39 | 250 | | 40 | 250 | | 41 | 240 | | 42 | 90 | | 43 | 250 | | 44 | 116 | | 45 | 250 | | 46 | 250 | | 47 | 122 | | 48 | 250 | | 49 | 184 | | 50 | 109 | | 51 | 145 | | 52 | 250 | | 53 | 250 | | 54 | 46 | | 55 | 250 | | 56 | 250 | | 57 | 250 | | 58 | 250 | | 59 | 250 | | 60 | 250 | | 61 | 250 | | 62 | 250 | | 63 | 250 | | 64 | 249 | | 65 | 193 | | 66 | 131 | | 67 | 29 | | 68 | 10 | | 69 | 194 | | 70 | 250 | | 71 | 250 | | 72 | 250 | | 73 | 250 | | 74 | 250 | | 75 | 250 | | 76 | 250 | | 77 | 250 | | 78 | 250 | | 79 | 250 | | 80 | 205 | | 81 | 250 | | 82 | 250 | | 83 | 250 | | 84 | 79 | | 85 | 250 | | 86 | 227 | | 87 | 152 | | 88 | 250 | | 89 | 250 | | 90 | 250 | | 91 | 198 | | 92 | 76 | | 93 | 250 | | 94 | 250 | | 95 | 250 | | 96 | 250 | | 97 | 195 | | 98 | 250 | | 99 | 231 | | 100 | 250 | | 101 | 196 | | 102 | 250 | | 103 | 250 | | 104 | 250 | | 105 | 250 | | 106 | 250 | | 107 | 250 | | 108 | 250 | | 109 | 250 | | 110 | 250 | | 111 | 250 | | 112 | 250 | | 113 | 250 | | 114 | 250 | | 115 | 250 | | 116 | 250 | | 117 | 250 | | 118 | 250 | | 119 | 250 | | 120 | 250 | | 121 | 250 | | 122 | 250 | | 123 | 14 | ### Training Hyperparameters - batch_size: (64, 64) - num_epochs: (100, 100) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 20 - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: BatchAllTripletLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - l2_weight: 0.01 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-------:|:-----:|:-------------:|:---------------:| | 0.0024 | 1 | 0.1854 | - | | 0.1211 | 50 | 0.1348 | - | | 0.2421 | 100 | 0.1372 | - | | 0.3632 | 150 | 0.1223 | - | | 0.4843 | 200 | 0.1383 | - | | 0.6053 | 250 | 0.1376 | - | | 0.7264 | 300 | 0.1289 | - | | 0.8475 | 350 | 0.1441 | - | | 0.9685 | 400 | 0.1218 | - | | 1.0896 | 450 | 0.1303 | - | | 1.2107 | 500 | 0.1273 | - | | 1.3317 | 550 | 0.1256 | - | | 1.4528 | 600 | 0.1298 | - | | 1.5738 | 650 | 0.1352 | - | | 1.6949 | 700 | 0.1349 | - | | 1.8160 | 750 | 0.1305 | - | | 1.9370 | 800 | 0.1181 | - | | 2.0581 | 850 | 0.1372 | - | | 2.1792 | 900 | 0.1353 | - | | 2.3002 | 950 | 0.1288 | - | | 2.4213 | 1000 | 0.158 | - | | 2.5424 | 1050 | 0.1332 | - | | 2.6634 | 1100 | 0.1302 | - | | 2.7845 | 1150 | 0.1392 | - | | 2.9056 | 1200 | 0.1237 | - | | 3.0266 | 1250 | 0.1477 | - | | 3.1477 | 1300 | 0.1284 | - | | 3.2688 | 1350 | 0.1214 | - | | 3.3898 | 1400 | 0.1319 | - | | 3.5109 | 1450 | 0.1201 | - | | 3.6320 | 1500 | 0.1334 | - | | 3.7530 | 1550 | 0.144 | - | | 3.8741 | 1600 | 0.1343 | - | | 3.9952 | 1650 | 0.1314 | - | | 4.1162 | 1700 | 0.1268 | - | | 4.2373 | 1750 | 0.1338 | - | | 4.3584 | 1800 | 
0.1339 | - | | 4.4794 | 1850 | 0.1275 | - | | 4.6005 | 1900 | 0.1225 | - | | 4.7215 | 1950 | 0.1249 | - | | 4.8426 | 2000 | 0.1289 | - | | 4.9637 | 2050 | 0.1206 | - | | 5.0847 | 2100 | 0.1309 | - | | 5.2058 | 2150 | 0.1206 | - | | 5.3269 | 2200 | 0.124 | - | | 5.4479 | 2250 | 0.1303 | - | | 5.5690 | 2300 | 0.1108 | - | | 5.6901 | 2350 | 0.1108 | - | | 5.8111 | 2400 | 0.1302 | - | | 5.9322 | 2450 | 0.1199 | - | | 6.0533 | 2500 | 0.1186 | - | | 6.1743 | 2550 | 0.1247 | - | | 6.2954 | 2600 | 0.1229 | - | | 6.4165 | 2650 | 0.124 | - | | 6.5375 | 2700 | 0.1282 | - | | 6.6586 | 2750 | 0.1201 | - | | 6.7797 | 2800 | 0.1196 | - | | 6.9007 | 2850 | 0.1277 | - | | 7.0218 | 2900 | 0.1256 | - | | 7.1429 | 2950 | 0.1207 | - | | 7.2639 | 3000 | 0.1155 | - | | 7.3850 | 3050 | 0.1259 | - | | 7.5061 | 3100 | 0.13 | - | | 7.6271 | 3150 | 0.1366 | - | | 7.7482 | 3200 | 0.131 | - | | 7.8692 | 3250 | 0.1289 | - | | 7.9903 | 3300 | 0.1321 | - | | 8.1114 | 3350 | 0.1187 | - | | 8.2324 | 3400 | 0.126 | - | | 8.3535 | 3450 | 0.1277 | - | | 8.4746 | 3500 | 0.1278 | - | | 8.5956 | 3550 | 0.1273 | - | | 8.7167 | 3600 | 0.118 | - | | 8.8378 | 3650 | 0.1186 | - | | 8.9588 | 3700 | 0.123 | - | | 9.0799 | 3750 | 0.1166 | - | | 9.2010 | 3800 | 0.1178 | - | | 9.3220 | 3850 | 0.1205 | - | | 9.4431 | 3900 | 0.1329 | - | | 9.5642 | 3950 | 0.1154 | - | | 9.6852 | 4000 | 0.1215 | - | | 9.8063 | 4050 | 0.125 | - | | 9.9274 | 4100 | 0.1198 | - | | 10.0484 | 4150 | 0.1111 | - | | 10.1695 | 4200 | 0.1221 | - | | 10.2906 | 4250 | 0.1239 | - | | 10.4116 | 4300 | 0.1248 | - | | 10.5327 | 4350 | 0.1064 | - | | 10.6538 | 4400 | 0.1202 | - | | 10.7748 | 4450 | 0.125 | - | | 10.8959 | 4500 | 0.1162 | - | | 11.0169 | 4550 | 0.1072 | - | | 11.1380 | 4600 | 0.1005 | - | | 11.2591 | 4650 | 0.1143 | - | | 11.3801 | 4700 | 0.1177 | - | | 11.5012 | 4750 | 0.1244 | - | | 11.6223 | 4800 | 0.1204 | - | | 11.7433 | 4850 | 0.1131 | - | | 11.8644 | 4900 | 0.1149 | - | | 11.9855 | 4950 | 0.1247 | - | | 12.1065 | 5000 | 0.1214 | - | | 12.2276 | 5050 | 0.1199 | - | | 12.3487 | 5100 | 0.1225 | - | | 12.4697 | 5150 | 0.1149 | - | | 12.5908 | 5200 | 0.1207 | - | | 12.7119 | 5250 | 0.1241 | - | | 12.8329 | 5300 | 0.1229 | - | | 12.9540 | 5350 | 0.1134 | - | | 13.0751 | 5400 | 0.1188 | - | | 13.1961 | 5450 | 0.1105 | - | | 13.3172 | 5500 | 0.1177 | - | | 13.4383 | 5550 | 0.1123 | - | | 13.5593 | 5600 | 0.1083 | - | | 13.6804 | 5650 | 0.1189 | - | | 13.8015 | 5700 | 0.1109 | - | | 13.9225 | 5750 | 0.1078 | - | | 14.0436 | 5800 | 0.1207 | - | | 14.1646 | 5850 | 0.1022 | - | | 14.2857 | 5900 | 0.1098 | - | | 14.4068 | 5950 | 0.116 | - | | 14.5278 | 6000 | 0.1157 | - | | 14.6489 | 6050 | 0.1199 | - | | 14.7700 | 6100 | 0.1118 | - | | 14.8910 | 6150 | 0.1044 | - | | 15.0121 | 6200 | 0.1096 | - | | 15.1332 | 6250 | 0.0925 | - | | 15.2542 | 6300 | 0.1015 | - | | 15.3753 | 6350 | 0.1117 | - | | 15.4964 | 6400 | 0.1116 | - | | 15.6174 | 6450 | 0.1183 | - | | 15.7385 | 6500 | 0.1206 | - | | 15.8596 | 6550 | 0.1104 | - | | 15.9806 | 6600 | 0.1148 | - | | 16.1017 | 6650 | 0.1095 | - | | 16.2228 | 6700 | 0.1168 | - | | 16.3438 | 6750 | 0.1115 | - | | 16.4649 | 6800 | 0.1042 | - | | 16.5860 | 6850 | 0.1074 | - | | 16.7070 | 6900 | 0.106 | - | | 16.8281 | 6950 | 0.1058 | - | | 16.9492 | 7000 | 0.1169 | - | | 17.0702 | 7050 | 0.1043 | - | | 17.1913 | 7100 | 0.0964 | - | | 17.3123 | 7150 | 0.0945 | - | | 17.4334 | 7200 | 0.1143 | - | | 17.5545 | 7250 | 0.0938 | - | | 17.6755 | 7300 | 0.0982 | - | | 17.7966 | 7350 | 0.1055 | - | | 17.9177 | 7400 | 0.1122 | - | | 18.0387 | 
7450 | 0.0897 | - | | 18.1598 | 7500 | 0.1044 | - | | 18.2809 | 7550 | 0.1052 | - | | 18.4019 | 7600 | 0.0934 | - | | 18.5230 | 7650 | 0.0978 | - | | 18.6441 | 7700 | 0.0952 | - | | 18.7651 | 7750 | 0.1081 | - | | 18.8862 | 7800 | 0.1042 | - | | 19.0073 | 7850 | 0.1099 | - | | 19.1283 | 7900 | 0.1112 | - | | 19.2494 | 7950 | 0.111 | - | | 19.3705 | 8000 | 0.1016 | - | | 19.4915 | 8050 | 0.1005 | - | | 19.6126 | 8100 | 0.0838 | - | | 19.7337 | 8150 | 0.095 | - | | 19.8547 | 8200 | 0.1194 | - | | 19.9758 | 8250 | 0.0993 | - | | 20.0969 | 8300 | 0.0969 | - | | 20.2179 | 8350 | 0.0954 | - | | 20.3390 | 8400 | 0.1096 | - | | 20.4600 | 8450 | 0.0979 | - | | 20.5811 | 8500 | 0.0915 | - | | 20.7022 | 8550 | 0.1042 | - | | 20.8232 | 8600 | 0.1141 | - | | 20.9443 | 8650 | 0.0894 | - | | 21.0654 | 8700 | 0.0971 | - | | 21.1864 | 8750 | 0.0835 | - | | 21.3075 | 8800 | 0.0957 | - | | 21.4286 | 8850 | 0.0866 | - | | 21.5496 | 8900 | 0.0832 | - | | 21.6707 | 8950 | 0.0786 | - | | 21.7918 | 9000 | 0.0851 | - | | 21.9128 | 9050 | 0.1033 | - | | 22.0339 | 9100 | 0.1017 | - | | 22.1550 | 9150 | 0.0878 | - | | 22.2760 | 9200 | 0.0932 | - | | 22.3971 | 9250 | 0.0905 | - | | 22.5182 | 9300 | 0.0821 | - | | 22.6392 | 9350 | 0.1042 | - | | 22.7603 | 9400 | 0.0866 | - | | 22.8814 | 9450 | 0.102 | - | | 23.0024 | 9500 | 0.0929 | - | | 23.1235 | 9550 | 0.0923 | - | | 23.2446 | 9600 | 0.083 | - | | 23.3656 | 9650 | 0.1073 | - | | 23.4867 | 9700 | 0.0949 | - | | 23.6077 | 9750 | 0.0938 | - | | 23.7288 | 9800 | 0.0946 | - | | 23.8499 | 9850 | 0.1085 | - | | 23.9709 | 9900 | 0.0941 | - | | 24.0920 | 9950 | 0.0842 | - | | 24.2131 | 10000 | 0.0773 | - | | 24.3341 | 10050 | 0.0883 | - | | 24.4552 | 10100 | 0.1143 | - | | 24.5763 | 10150 | 0.0972 | - | | 24.6973 | 10200 | 0.077 | - | | 24.8184 | 10250 | 0.0901 | - | | 24.9395 | 10300 | 0.0899 | - | | 25.0605 | 10350 | 0.0895 | - | | 25.1816 | 10400 | 0.0775 | - | | 25.3027 | 10450 | 0.0804 | - | | 25.4237 | 10500 | 0.0901 | - | | 25.5448 | 10550 | 0.0864 | - | | 25.6659 | 10600 | 0.1167 | - | | 25.7869 | 10650 | 0.0956 | - | | 25.9080 | 10700 | 0.0882 | - | | 26.0291 | 10750 | 0.0868 | - | | 26.1501 | 10800 | 0.0798 | - | | 26.2712 | 10850 | 0.0878 | - | | 26.3923 | 10900 | 0.0944 | - | | 26.5133 | 10950 | 0.0813 | - | | 26.6344 | 11000 | 0.0858 | - | | 26.7554 | 11050 | 0.0775 | - | | 26.8765 | 11100 | 0.0882 | - | | 26.9976 | 11150 | 0.0917 | - | | 27.1186 | 11200 | 0.0838 | - | | 27.2397 | 11250 | 0.1066 | - | | 27.3608 | 11300 | 0.0899 | - | | 27.4818 | 11350 | 0.095 | - | | 27.6029 | 11400 | 0.0684 | - | | 27.7240 | 11450 | 0.0975 | - | | 27.8450 | 11500 | 0.0752 | - | | 27.9661 | 11550 | 0.0694 | - | | 28.0872 | 11600 | 0.0597 | - | | 28.2082 | 11650 | 0.066 | - | | 28.3293 | 11700 | 0.0545 | - | | 28.4504 | 11750 | 0.0787 | - | | 28.5714 | 11800 | 0.0877 | - | | 28.6925 | 11850 | 0.0959 | - | | 28.8136 | 11900 | 0.0748 | - | | 28.9346 | 11950 | 0.0759 | - | | 29.0557 | 12000 | 0.0722 | - | | 29.1768 | 12050 | 0.0836 | - | | 29.2978 | 12100 | 0.0965 | - | | 29.4189 | 12150 | 0.0679 | - | | 29.5400 | 12200 | 0.0699 | - | | 29.6610 | 12250 | 0.0895 | - | | 29.7821 | 12300 | 0.0593 | - | | 29.9031 | 12350 | 0.0686 | - | | 30.0242 | 12400 | 0.0694 | - | | 30.1453 | 12450 | 0.0563 | - | | 30.2663 | 12500 | 0.069 | - | | 30.3874 | 12550 | 0.0662 | - | | 30.5085 | 12600 | 0.0706 | - | | 30.6295 | 12650 | 0.0671 | - | | 30.7506 | 12700 | 0.051 | - | | 30.8717 | 12750 | 0.0735 | - | | 30.9927 | 12800 | 0.0712 | - | | 31.1138 | 12850 | 0.0581 | - | | 31.2349 | 12900 | 0.0758 | 
- | | 31.3559 | 12950 | 0.0594 | - | | 31.4770 | 13000 | 0.0607 | - | | 31.5981 | 13050 | 0.0577 | - | | 31.7191 | 13100 | 0.0878 | - | | 31.8402 | 13150 | 0.0833 | - | | 31.9613 | 13200 | 0.0776 | - | | 32.0823 | 13250 | 0.0765 | - | | 32.2034 | 13300 | 0.048 | - | | 32.3245 | 13350 | 0.0622 | - | | 32.4455 | 13400 | 0.0628 | - | | 32.5666 | 13450 | 0.071 | - | | 32.6877 | 13500 | 0.0707 | - | | 32.8087 | 13550 | 0.0725 | - | | 32.9298 | 13600 | 0.065 | - | | 33.0508 | 13650 | 0.0726 | - | | 33.1719 | 13700 | 0.0397 | - | | 33.2930 | 13750 | 0.0571 | - | | 33.4140 | 13800 | 0.0726 | - | | 33.5351 | 13850 | 0.0711 | - | | 33.6562 | 13900 | 0.0538 | - | | 33.7772 | 13950 | 0.0469 | - | | 33.8983 | 14000 | 0.0475 | - | | 34.0194 | 14050 | 0.0654 | - | | 34.1404 | 14100 | 0.0594 | - | | 34.2615 | 14150 | 0.049 | - | | 34.3826 | 14200 | 0.0713 | - | | 34.5036 | 14250 | 0.0616 | - | | 34.6247 | 14300 | 0.083 | - | | 34.7458 | 14350 | 0.0689 | - | | 34.8668 | 14400 | 0.0868 | - | | 34.9879 | 14450 | 0.0649 | - | | 35.1090 | 14500 | 0.0608 | - | | 35.2300 | 14550 | 0.0788 | - | | 35.3511 | 14600 | 0.0571 | - | | 35.4722 | 14650 | 0.0364 | - | | 35.5932 | 14700 | 0.0741 | - | | 35.7143 | 14750 | 0.0372 | - | | 35.8354 | 14800 | 0.0606 | - | | 35.9564 | 14850 | 0.0679 | - | | 36.0775 | 14900 | 0.0569 | - | | 36.1985 | 14950 | 0.0526 | - | | 36.3196 | 15000 | 0.046 | - | | 36.4407 | 15050 | 0.056 | - | | 36.5617 | 15100 | 0.052 | - | | 36.6828 | 15150 | 0.0571 | - | | 36.8039 | 15200 | 0.0668 | - | | 36.9249 | 15250 | 0.0679 | - | | 37.0460 | 15300 | 0.0495 | - | | 37.1671 | 15350 | 0.0447 | - | | 37.2881 | 15400 | 0.0592 | - | | 37.4092 | 15450 | 0.0492 | - | | 37.5303 | 15500 | 0.0573 | - | | 37.6513 | 15550 | 0.042 | - | | 37.7724 | 15600 | 0.052 | - | | 37.8935 | 15650 | 0.0684 | - | | 38.0145 | 15700 | 0.0394 | - | | 38.1356 | 15750 | 0.0438 | - | | 38.2567 | 15800 | 0.0296 | - | | 38.3777 | 15850 | 0.0526 | - | | 38.4988 | 15900 | 0.0456 | - | | 38.6199 | 15950 | 0.0391 | - | | 38.7409 | 16000 | 0.0867 | - | | 38.8620 | 16050 | 0.0522 | - | | 38.9831 | 16100 | 0.0414 | - | | 39.1041 | 16150 | 0.0569 | - | | 39.2252 | 16200 | 0.0696 | - | | 39.3462 | 16250 | 0.0379 | - | | 39.4673 | 16300 | 0.0562 | - | | 39.5884 | 16350 | 0.0429 | - | | 39.7094 | 16400 | 0.037 | - | | 39.8305 | 16450 | 0.0533 | - | | 39.9516 | 16500 | 0.0445 | - | | 40.0726 | 16550 | 0.0487 | - | | 40.1937 | 16600 | 0.0605 | - | | 40.3148 | 16650 | 0.066 | - | | 40.4358 | 16700 | 0.0534 | - | | 40.5569 | 16750 | 0.0632 | - | | 40.6780 | 16800 | 0.0712 | - | | 40.7990 | 16850 | 0.0434 | - | | 40.9201 | 16900 | 0.0464 | - | | 41.0412 | 16950 | 0.0353 | - | | 41.1622 | 17000 | 0.0323 | - | | 41.2833 | 17050 | 0.0466 | - | | 41.4044 | 17100 | 0.0608 | - | | 41.5254 | 17150 | 0.0483 | - | | 41.6465 | 17200 | 0.0377 | - | | 41.7676 | 17250 | 0.0463 | - | | 41.8886 | 17300 | 0.0375 | - | | 42.0097 | 17350 | 0.0618 | - | | 42.1308 | 17400 | 0.0455 | - | | 42.2518 | 17450 | 0.0466 | - | | 42.3729 | 17500 | 0.0342 | - | | 42.4939 | 17550 | 0.0393 | - | | 42.6150 | 17600 | 0.0425 | - | | 42.7361 | 17650 | 0.0633 | - | | 42.8571 | 17700 | 0.0379 | - | | 42.9782 | 17750 | 0.0562 | - | | 43.0993 | 17800 | 0.0226 | - | | 43.2203 | 17850 | 0.0441 | - | | 43.3414 | 17900 | 0.034 | - | | 43.4625 | 17950 | 0.0342 | - | | 43.5835 | 18000 | 0.0625 | - | | 43.7046 | 18050 | 0.0443 | - | | 43.8257 | 18100 | 0.0434 | - | | 43.9467 | 18150 | 0.0349 | - | | 44.0678 | 18200 | 0.0675 | - | | 44.1889 | 18250 | 0.0397 | - | | 44.3099 | 18300 | 0.0271 | - | 
| 44.4310 | 18350 | 0.0284 | - | | 44.5521 | 18400 | 0.0379 | - | | 44.6731 | 18450 | 0.0351 | - | | 44.7942 | 18500 | 0.0346 | - | | 44.9153 | 18550 | 0.0381 | - | | 45.0363 | 18600 | 0.0584 | - | | 45.1574 | 18650 | 0.0444 | - | | 45.2785 | 18700 | 0.0384 | - | | 45.3995 | 18750 | 0.0468 | - | | 45.5206 | 18800 | 0.0487 | - | | 45.6416 | 18850 | 0.0303 | - | | 45.7627 | 18900 | 0.0351 | - | | 45.8838 | 18950 | 0.0214 | - | | 46.0048 | 19000 | 0.0337 | - | | 46.1259 | 19050 | 0.0478 | - | | 46.2470 | 19100 | 0.045 | - | | 46.3680 | 19150 | 0.0399 | - | | 46.4891 | 19200 | 0.0324 | - | | 46.6102 | 19250 | 0.0433 | - | | 46.7312 | 19300 | 0.0524 | - | | 46.8523 | 19350 | 0.0431 | - | | 46.9734 | 19400 | 0.0308 | - | | 47.0944 | 19450 | 0.0338 | - | | 47.2155 | 19500 | 0.0395 | - | | 47.3366 | 19550 | 0.0421 | - | | 47.4576 | 19600 | 0.0404 | - | | 47.5787 | 19650 | 0.021 | - | | 47.6998 | 19700 | 0.0399 | - | | 47.8208 | 19750 | 0.0397 | - | | 47.9419 | 19800 | 0.0416 | - | | 48.0630 | 19850 | 0.0371 | - | | 48.1840 | 19900 | 0.027 | - | | 48.3051 | 19950 | 0.0363 | - | | 48.4262 | 20000 | 0.0262 | - | | 48.5472 | 20050 | 0.0314 | - | | 48.6683 | 20100 | 0.0398 | - | | 48.7893 | 20150 | 0.0434 | - | | 48.9104 | 20200 | 0.0455 | - | | 49.0315 | 20250 | 0.0314 | - | | 49.1525 | 20300 | 0.0322 | - | | 49.2736 | 20350 | 0.0166 | - | | 49.3947 | 20400 | 0.0295 | - | | 49.5157 | 20450 | 0.0405 | - | | 49.6368 | 20500 | 0.0349 | - | | 49.7579 | 20550 | 0.0378 | - | | 49.8789 | 20600 | 0.0566 | - | | 50.0 | 20650 | 0.0542 | - | | 50.1211 | 20700 | 0.0194 | - | | 50.2421 | 20750 | 0.0462 | - | | 50.3632 | 20800 | 0.0432 | - | | 50.4843 | 20850 | 0.0295 | - | | 50.6053 | 20900 | 0.0293 | - | | 50.7264 | 20950 | 0.0226 | - | | 50.8475 | 21000 | 0.0446 | - | | 50.9685 | 21050 | 0.0181 | - | | 51.0896 | 21100 | 0.0226 | - | | 51.2107 | 21150 | 0.0141 | - | | 51.3317 | 21200 | 0.0253 | - | | 51.4528 | 21250 | 0.0479 | - | | 51.5738 | 21300 | 0.0197 | - | | 51.6949 | 21350 | 0.0387 | - | | 51.8160 | 21400 | 0.0391 | - | | 51.9370 | 21450 | 0.0289 | - | | 52.0581 | 21500 | 0.0336 | - | | 52.1792 | 21550 | 0.02 | - | | 52.3002 | 21600 | 0.0203 | - | | 52.4213 | 21650 | 0.034 | - | | 52.5424 | 21700 | 0.0338 | - | | 52.6634 | 21750 | 0.0253 | - | | 52.7845 | 21800 | 0.0423 | - | | 52.9056 | 21850 | 0.0427 | - | | 53.0266 | 21900 | 0.0322 | - | | 53.1477 | 21950 | 0.0169 | - | | 53.2688 | 22000 | 0.0101 | - | | 53.3898 | 22050 | 0.0349 | - | | 53.5109 | 22100 | 0.0338 | - | | 53.6320 | 22150 | 0.0573 | - | | 53.7530 | 22200 | 0.0235 | - | | 53.8741 | 22250 | 0.0357 | - | | 53.9952 | 22300 | 0.0348 | - | | 54.1162 | 22350 | 0.0289 | - | | 54.2373 | 22400 | 0.0299 | - | | 54.3584 | 22450 | 0.0339 | - | | 54.4794 | 22500 | 0.0237 | - | | 54.6005 | 22550 | 0.0261 | - | | 54.7215 | 22600 | 0.0265 | - | | 54.8426 | 22650 | 0.0192 | - | | 54.9637 | 22700 | 0.0205 | - | | 55.0847 | 22750 | 0.0302 | - | | 55.2058 | 22800 | 0.0289 | - | | 55.3269 | 22850 | 0.0188 | - | | 55.4479 | 22900 | 0.0204 | - | | 55.5690 | 22950 | 0.0404 | - | | 55.6901 | 23000 | 0.0416 | - | | 55.8111 | 23050 | 0.025 | - | | 55.9322 | 23100 | 0.0427 | - | | 56.0533 | 23150 | 0.0326 | - | | 56.1743 | 23200 | 0.0299 | - | | 56.2954 | 23250 | 0.0362 | - | | 56.4165 | 23300 | 0.0413 | - | | 56.5375 | 23350 | 0.029 | - | | 56.6586 | 23400 | 0.0349 | - | | 56.7797 | 23450 | 0.0372 | - | | 56.9007 | 23500 | 0.0227 | - | | 57.0218 | 23550 | 0.0195 | - | | 57.1429 | 23600 | 0.0202 | - | | 57.2639 | 23650 | 0.0223 | - | | 57.3850 | 23700 | 0.0282 | - | | 
57.5061 | 23750 | 0.0168 | - | | 57.6271 | 23800 | 0.0136 | - | | 57.7482 | 23850 | 0.0276 | - | | 57.8692 | 23900 | 0.0283 | - | | 57.9903 | 23950 | 0.0344 | - | | 58.1114 | 24000 | 0.0162 | - | | 58.2324 | 24050 | 0.0241 | - | | 58.3535 | 24100 | 0.0279 | - | | 58.4746 | 24150 | 0.0346 | - | | 58.5956 | 24200 | 0.0356 | - | | 58.7167 | 24250 | 0.0322 | - | | 58.8378 | 24300 | 0.0309 | - | | 58.9588 | 24350 | 0.0224 | - | | 59.0799 | 24400 | 0.0138 | - | | 59.2010 | 24450 | 0.041 | - | | 59.3220 | 24500 | 0.0248 | - | | 59.4431 | 24550 | 0.0279 | - | | 59.5642 | 24600 | 0.0242 | - | | 59.6852 | 24650 | 0.0321 | - | | 59.8063 | 24700 | 0.0188 | - | | 59.9274 | 24750 | 0.0269 | - | | 60.0484 | 24800 | 0.0215 | - | | 60.1695 | 24850 | 0.0186 | - | | 60.2906 | 24900 | 0.0229 | - | | 60.4116 | 24950 | 0.0284 | - | | 60.5327 | 25000 | 0.0263 | - | | 60.6538 | 25050 | 0.0226 | - | | 60.7748 | 25100 | 0.0182 | - | | 60.8959 | 25150 | 0.0095 | - | | 61.0169 | 25200 | 0.0199 | - | | 61.1380 | 25250 | 0.0164 | - | | 61.2591 | 25300 | 0.023 | - | | 61.3801 | 25350 | 0.029 | - | | 61.5012 | 25400 | 0.0229 | - | | 61.6223 | 25450 | 0.023 | - | | 61.7433 | 25500 | 0.0189 | - | | 61.8644 | 25550 | 0.0268 | - | | 61.9855 | 25600 | 0.0313 | - | | 62.1065 | 25650 | 0.0121 | - | | 62.2276 | 25700 | 0.0187 | - | | 62.3487 | 25750 | 0.0126 | - | | 62.4697 | 25800 | 0.0404 | - | | 62.5908 | 25850 | 0.0238 | - | | 62.7119 | 25900 | 0.026 | - | | 62.8329 | 25950 | 0.0176 | - | | 62.9540 | 26000 | 0.0153 | - | | 63.0751 | 26050 | 0.0223 | - | | 63.1961 | 26100 | 0.013 | - | | 63.3172 | 26150 | 0.0177 | - | | 63.4383 | 26200 | 0.0111 | - | | 63.5593 | 26250 | 0.0185 | - | | 63.6804 | 26300 | 0.0179 | - | | 63.8015 | 26350 | 0.0111 | - | | 63.9225 | 26400 | 0.0258 | - | | 64.0436 | 26450 | 0.0199 | - | | 64.1646 | 26500 | 0.0171 | - | | 64.2857 | 26550 | 0.0169 | - | | 64.4068 | 26600 | 0.0154 | - | | 64.5278 | 26650 | 0.0106 | - | | 64.6489 | 26700 | 0.0071 | - | | 64.7700 | 26750 | 0.0151 | - | | 64.8910 | 26800 | 0.0335 | - | | 65.0121 | 26850 | 0.0184 | - | | 65.1332 | 26900 | 0.0164 | - | | 65.2542 | 26950 | 0.0178 | - | | 65.3753 | 27000 | 0.0119 | - | | 65.4964 | 27050 | 0.0113 | - | | 65.6174 | 27100 | 0.0167 | - | | 65.7385 | 27150 | 0.0163 | - | | 65.8596 | 27200 | 0.0212 | - | | 65.9806 | 27250 | 0.032 | - | | 66.1017 | 27300 | 0.0264 | - | | 66.2228 | 27350 | 0.0158 | - | | 66.3438 | 27400 | 0.0269 | - | | 66.4649 | 27450 | 0.0115 | - | | 66.5860 | 27500 | 0.0135 | - | | 66.7070 | 27550 | 0.0067 | - | | 66.8281 | 27600 | 0.0135 | - | | 66.9492 | 27650 | 0.0176 | - | | 67.0702 | 27700 | 0.0236 | - | | 67.1913 | 27750 | 0.0142 | - | | 67.3123 | 27800 | 0.0262 | - | | 67.4334 | 27850 | 0.0173 | - | | 67.5545 | 27900 | 0.0133 | - | | 67.6755 | 27950 | 0.011 | - | | 67.7966 | 28000 | 0.0193 | - | | 67.9177 | 28050 | 0.0251 | - | | 68.0387 | 28100 | 0.0221 | - | | 68.1598 | 28150 | 0.0176 | - | | 68.2809 | 28200 | 0.01 | - | | 68.4019 | 28250 | 0.0171 | - | | 68.5230 | 28300 | 0.0209 | - | | 68.6441 | 28350 | 0.0285 | - | | 68.7651 | 28400 | 0.0316 | - | | 68.8862 | 28450 | 0.0067 | - | | 69.0073 | 28500 | 0.0103 | - | | 69.1283 | 28550 | 0.0193 | - | | 69.2494 | 28600 | 0.0172 | - | | 69.3705 | 28650 | 0.0165 | - | | 69.4915 | 28700 | 0.0079 | - | | 69.6126 | 28750 | 0.0187 | - | | 69.7337 | 28800 | 0.0269 | - | | 69.8547 | 28850 | 0.0154 | - | | 69.9758 | 28900 | 0.0059 | - | | 70.0969 | 28950 | 0.0186 | - | | 70.2179 | 29000 | 0.0144 | - | | 70.3390 | 29050 | 0.0158 | - | | 70.4600 | 29100 | 0.0329 | - | | 
70.5811 | 29150 | 0.0207 | - | | 70.7022 | 29200 | 0.0192 | - | | 70.8232 | 29250 | 0.0132 | - | | 70.9443 | 29300 | 0.0297 | - | | 71.0654 | 29350 | 0.0248 | - | | 71.1864 | 29400 | 0.0191 | - | | 71.3075 | 29450 | 0.0101 | - | | 71.4286 | 29500 | 0.0027 | - | | 71.5496 | 29550 | 0.0158 | - | | 71.6707 | 29600 | 0.013 | - | | 71.7918 | 29650 | 0.0061 | - | | 71.9128 | 29700 | 0.0055 | - | | 72.0339 | 29750 | 0.0219 | - | | 72.1550 | 29800 | 0.0189 | - | | 72.2760 | 29850 | 0.0227 | - | | 72.3971 | 29900 | 0.0161 | - | | 72.5182 | 29950 | 0.0168 | - | | 72.6392 | 30000 | 0.018 | - | | 72.7603 | 30050 | 0.0122 | - | | 72.8814 | 30100 | 0.0152 | - | | 73.0024 | 30150 | 0.0074 | - | | 73.1235 | 30200 | 0.0024 | - | | 73.2446 | 30250 | 0.0086 | - | | 73.3656 | 30300 | 0.0028 | - | | 73.4867 | 30350 | 0.0104 | - | | 73.6077 | 30400 | 0.0144 | - | | 73.7288 | 30450 | 0.0125 | - | | 73.8499 | 30500 | 0.0248 | - | | 73.9709 | 30550 | 0.0174 | - | | 74.0920 | 30600 | 0.0063 | - | | 74.2131 | 30650 | 0.0146 | - | | 74.3341 | 30700 | 0.016 | - | | 74.4552 | 30750 | 0.0145 | - | | 74.5763 | 30800 | 0.0058 | - | | 74.6973 | 30850 | 0.0114 | - | | 74.8184 | 30900 | 0.0104 | - | | 74.9395 | 30950 | 0.0277 | - | | 75.0605 | 31000 | 0.0034 | - | | 75.1816 | 31050 | 0.0111 | - | | 75.3027 | 31100 | 0.0149 | - | | 75.4237 | 31150 | 0.0053 | - | | 75.5448 | 31200 | 0.008 | - | | 75.6659 | 31250 | 0.013 | - | | 75.7869 | 31300 | 0.0151 | - | | 75.9080 | 31350 | 0.0198 | - | | 76.0291 | 31400 | 0.013 | - | | 76.1501 | 31450 | 0.0086 | - | | 76.2712 | 31500 | 0.0028 | - | | 76.3923 | 31550 | 0.0115 | - | | 76.5133 | 31600 | 0.0295 | - | | 76.6344 | 31650 | 0.0105 | - | | 76.7554 | 31700 | 0.0098 | - | | 76.8765 | 31750 | 0.0187 | - | | 76.9976 | 31800 | 0.0133 | - | | 77.1186 | 31850 | 0.0075 | - | | 77.2397 | 31900 | 0.017 | - | | 77.3608 | 31950 | 0.0112 | - | | 77.4818 | 32000 | 0.006 | - | | 77.6029 | 32050 | 0.0093 | - | | 77.7240 | 32100 | 0.0166 | - | | 77.8450 | 32150 | 0.0036 | - | | 77.9661 | 32200 | 0.0109 | - | | 78.0872 | 32250 | 0.0137 | - | | 78.2082 | 32300 | 0.0051 | - | | 78.3293 | 32350 | 0.0088 | - | | 78.4504 | 32400 | 0.0127 | - | | 78.5714 | 32450 | 0.021 | - | | 78.6925 | 32500 | 0.011 | - | | 78.8136 | 32550 | 0.0101 | - | | 78.9346 | 32600 | 0.017 | - | | 79.0557 | 32650 | 0.0042 | - | | 79.1768 | 32700 | 0.0078 | - | | 79.2978 | 32750 | 0.0 | - | | 79.4189 | 32800 | 0.015 | - | | 79.5400 | 32850 | 0.0023 | - | | 79.6610 | 32900 | 0.0 | - | | 79.7821 | 32950 | 0.0024 | - | | 79.9031 | 33000 | 0.0087 | - | | 80.0242 | 33050 | 0.0166 | - | | 80.1453 | 33100 | 0.0007 | - | | 80.2663 | 33150 | 0.0084 | - | | 80.3874 | 33200 | 0.0086 | - | | 80.5085 | 33250 | 0.0103 | - | | 80.6295 | 33300 | 0.0121 | - | | 80.7506 | 33350 | 0.0042 | - | | 80.8717 | 33400 | 0.0042 | - | | 80.9927 | 33450 | 0.0021 | - | | 81.1138 | 33500 | 0.0041 | - | | 81.2349 | 33550 | 0.0141 | - | | 81.3559 | 33600 | 0.0144 | - | | 81.4770 | 33650 | 0.0172 | - | | 81.5981 | 33700 | 0.0077 | - | | 81.7191 | 33750 | 0.0112 | - | | 81.8402 | 33800 | 0.0109 | - | | 81.9613 | 33850 | 0.009 | - | | 82.0823 | 33900 | 0.004 | - | | 82.2034 | 33950 | 0.0034 | - | | 82.3245 | 34000 | 0.0019 | - | | 82.4455 | 34050 | 0.011 | - | | 82.5666 | 34100 | 0.0058 | - | | 82.6877 | 34150 | 0.0091 | - | | 82.8087 | 34200 | 0.0069 | - | | 82.9298 | 34250 | 0.0047 | - | | 83.0508 | 34300 | 0.015 | - | | 83.1719 | 34350 | 0.0029 | - | | 83.2930 | 34400 | 0.0197 | - | | 83.4140 | 34450 | 0.0063 | - | | 83.5351 | 34500 | 0.0121 | - | | 83.6562 | 
34550 | 0.0091 | - | | 83.7772 | 34600 | 0.0096 | - | | 83.8983 | 34650 | 0.0077 | - | | 84.0194 | 34700 | 0.0097 | - | | 84.1404 | 34750 | 0.0028 | - | | 84.2615 | 34800 | 0.0006 | - | | 84.3826 | 34850 | 0.0063 | - | | 84.5036 | 34900 | 0.007 | - | | 84.6247 | 34950 | 0.001 | - | | 84.7458 | 35000 | 0.0069 | - | | 84.8668 | 35050 | 0.0043 | - | | 84.9879 | 35100 | 0.0068 | - | | 85.1090 | 35150 | 0.0069 | - | | 85.2300 | 35200 | 0.01 | - | | 85.3511 | 35250 | 0.0067 | - | | 85.4722 | 35300 | 0.0 | - | | 85.5932 | 35350 | 0.0014 | - | | 85.7143 | 35400 | 0.0038 | - | | 85.8354 | 35450 | 0.0019 | - | | 85.9564 | 35500 | 0.0057 | - | | 86.0775 | 35550 | 0.0077 | - | | 86.1985 | 35600 | 0.0067 | - | | 86.3196 | 35650 | 0.0133 | - | | 86.4407 | 35700 | 0.0152 | - | | 86.5617 | 35750 | 0.0023 | - | | 86.6828 | 35800 | 0.0155 | - | | 86.8039 | 35850 | 0.011 | - | | 86.9249 | 35900 | 0.0076 | - | | 87.0460 | 35950 | 0.0153 | - | | 87.1671 | 36000 | 0.0026 | - | | 87.2881 | 36050 | 0.0115 | - | | 87.4092 | 36100 | 0.0045 | - | | 87.5303 | 36150 | 0.0016 | - | | 87.6513 | 36200 | 0.0116 | - | | 87.7724 | 36250 | 0.0018 | - | | 87.8935 | 36300 | 0.0105 | - | | 88.0145 | 36350 | 0.0119 | - | | 88.1356 | 36400 | 0.0099 | - | | 88.2567 | 36450 | 0.0076 | - | | 88.3777 | 36500 | 0.0143 | - | | 88.4988 | 36550 | 0.0067 | - | | 88.6199 | 36600 | 0.0067 | - | | 88.7409 | 36650 | 0.0097 | - | | 88.8620 | 36700 | 0.0004 | - | | 88.9831 | 36750 | 0.004 | - | | 89.1041 | 36800 | 0.0 | - | | 89.2252 | 36850 | 0.0132 | - | | 89.3462 | 36900 | 0.0038 | - | | 89.4673 | 36950 | 0.0042 | - | | 89.5884 | 37000 | 0.0039 | - | | 89.7094 | 37050 | 0.003 | - | | 89.8305 | 37100 | 0.0013 | - | | 89.9516 | 37150 | 0.0007 | - | | 90.0726 | 37200 | 0.0002 | - | | 90.1937 | 37250 | 0.004 | - | | 90.3148 | 37300 | 0.0075 | - | | 90.4358 | 37350 | 0.0 | - | | 90.5569 | 37400 | 0.0071 | - | | 90.6780 | 37450 | 0.0 | - | | 90.7990 | 37500 | 0.0021 | - | | 90.9201 | 37550 | 0.0064 | - | | 91.0412 | 37600 | 0.0036 | - | | 91.1622 | 37650 | 0.0049 | - | | 91.2833 | 37700 | 0.0042 | - | | 91.4044 | 37750 | 0.0072 | - | | 91.5254 | 37800 | 0.0072 | - | | 91.6465 | 37850 | 0.0126 | - | | 91.7676 | 37900 | 0.0027 | - | | 91.8886 | 37950 | 0.0074 | - | | 92.0097 | 38000 | 0.0046 | - | | 92.1308 | 38050 | 0.0115 | - | | 92.2518 | 38100 | 0.0 | - | | 92.3729 | 38150 | 0.0098 | - | | 92.4939 | 38200 | 0.002 | - | | 92.6150 | 38250 | 0.0018 | - | | 92.7361 | 38300 | 0.0039 | - | | 92.8571 | 38350 | 0.0069 | - | | 92.9782 | 38400 | 0.0021 | - | | 93.0993 | 38450 | 0.0053 | - | | 93.2203 | 38500 | 0.0002 | - | | 93.3414 | 38550 | 0.0079 | - | | 93.4625 | 38600 | 0.0006 | - | | 93.5835 | 38650 | 0.0054 | - | | 93.7046 | 38700 | 0.0062 | - | | 93.8257 | 38750 | 0.0006 | - | | 93.9467 | 38800 | 0.0107 | - | | 94.0678 | 38850 | 0.0059 | - | | 94.1889 | 38900 | 0.0091 | - | | 94.3099 | 38950 | 0.0 | - | | 94.4310 | 39000 | 0.0018 | - | | 94.5521 | 39050 | 0.0058 | - | | 94.6731 | 39100 | 0.0031 | - | | 94.7942 | 39150 | 0.0011 | - | | 94.9153 | 39200 | 0.0003 | - | | 95.0363 | 39250 | 0.012 | - | | 95.1574 | 39300 | 0.0039 | - | | 95.2785 | 39350 | 0.0025 | - | | 95.3995 | 39400 | 0.0007 | - | | 95.5206 | 39450 | 0.0029 | - | | 95.6416 | 39500 | 0.0065 | - | | 95.7627 | 39550 | 0.0 | - | | 95.8838 | 39600 | 0.0034 | - | | 96.0048 | 39650 | 0.0034 | - | | 96.1259 | 39700 | 0.0015 | - | | 96.2470 | 39750 | 0.0047 | - | | 96.3680 | 39800 | 0.005 | - | | 96.4891 | 39850 | 0.0032 | - | | 96.6102 | 39900 | 0.0004 | - | | 96.7312 | 39950 | 0.0015 | - | | 
96.8523 | 40000 | 0.0027 | - | | 96.9734 | 40050 | 0.0059 | - | | 97.0944 | 40100 | 0.0013 | - | | 97.2155 | 40150 | 0.0003 | - | | 97.3366 | 40200 | 0.0011 | - | | 97.4576 | 40250 | 0.0 | - | | 97.5787 | 40300 | 0.0003 | - | | 97.6998 | 40350 | 0.0013 | - | | 97.8208 | 40400 | 0.0112 | - | | 97.9419 | 40450 | 0.0013 | - | | 98.0630 | 40500 | 0.0096 | - | | 98.1840 | 40550 | 0.0017 | - | | 98.3051 | 40600 | 0.0116 | - | | 98.4262 | 40650 | 0.0015 | - | | 98.5472 | 40700 | 0.0007 | - | | 98.6683 | 40750 | 0.0 | - | | 98.7893 | 40800 | 0.0 | - | | 98.9104 | 40850 | 0.0066 | - | | 99.0315 | 40900 | 0.0001 | - | | 99.1525 | 40950 | 0.0071 | - | | 99.2736 | 41000 | 0.0001 | - | | 99.3947 | 41050 | 0.0031 | - | | 99.5157 | 41100 | 0.0056 | - | | 99.6368 | 41150 | 0.0035 | - | | 99.7579 | 41200 | 0.0048 | - | | 99.8789 | 41250 | 0.0018 | - | | 100.0 | 41300 | 0.0019 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.1.2 - Sentence Transformers: 4.1.0 - Transformers: 4.52.1 - PyTorch: 2.7.0+cu126 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
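The hyperparameters listed under Training Hyperparameters above map one-to-one onto SetFit's `TrainingArguments`. A minimal training sketch under stated assumptions — the base sentence-transformer body and the tiny inline dataset are placeholders, since the card names neither:

```python
from datasets import Dataset
from sentence_transformers.losses import BatchAllTripletLoss
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder base encoder -- the card does not name the sentence-transformer body.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")

# Toy stand-in for the real training set ("text"/"label" columns, with at least
# two rows per label so the triplet loss can form positive pairs in a batch).
train_ds = Dataset.from_dict({
    "text": ["상품명 : 강아지 간식 A", "상품명 : 강아지 간식 B",
             "상품명 : 고양이 장난감 A", "상품명 : 고양이 장난감 B"],
    "label": [0, 0, 1, 1],
})

# Values below copied from the Training Hyperparameters section of this card.
args = TrainingArguments(
    batch_size=(64, 64),                # (embedding phase, classifier phase)
    num_epochs=(100, 100),
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    loss=BatchAllTripletLoss,
    margin=0.25,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()
```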
moayl/LLAMA_3.2_MMAD
moayl
2025-05-21T13:13:10Z
0
1
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mllama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-18T12:37:53Z
---
base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** moayl
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit

This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
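A rough inference sketch via Unsloth's `FastVisionModel` — hypothetical usage, assuming this repo holds weights Unsloth can load directly (it may instead hold LoRA adapters), with a blank placeholder image standing in for real input:

```python
from PIL import Image
from unsloth import FastVisionModel

model, tokenizer = FastVisionModel.from_pretrained(
    "moayl/LLAMA_3.2_MMAD",   # assumption: loadable as a full vision model
    load_in_4bit=True,
)
FastVisionModel.for_inference(model)  # switch to inference mode

# Blank placeholder image; replace with a real PIL image.
image = Image.new("RGB", (512, 512), "white")
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ],
}]
input_text = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
inputs = tokenizer(image, input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```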
mjessup/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scurrying_shaggy_dinosaur
mjessup
2025-05-21T13:11:53Z
7
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am scurrying shaggy dinosaur", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-28T02:51:08Z
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scurrying_shaggy_dinosaur
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am scurrying shaggy dinosaur
- trl
licence: license
---

# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scurrying_shaggy_dinosaur

This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mjessup/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scurrying_shaggy_dinosaur", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
    title        = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author       = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year         = 2024,
    eprint       = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
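For context on the training side: GRPO in TRL is driven by prompt-level reward functions rather than a separate reward model. A hypothetical sketch with `GRPOTrainer` — the toy prompts and the brevity reward below are illustrative stand-ins, not the swarm's actual setup:

```python
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Toy prompt set; a real rl-swarm run supplies its own prompts.
train_dataset = Dataset.from_dict(
    {"prompt": ["What is 2 + 2?", "Name a prime number."] * 4}
)

def reward_brevity(completions, **kwargs):
    # Illustrative reward only: score shorter completions higher.
    return [-float(len(c)) for c in completions]

args = GRPOConfig(
    output_dir="grpo-sketch",
    per_device_train_batch_size=2,
    num_generations=2,          # completions sampled per prompt
    max_completion_length=64,
)

trainer = GRPOTrainer(
    model="unsloth/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_brevity,
    args=args,
    train_dataset=train_dataset,
)
trainer.train()
```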
Steveeeeeeen/XCodec2
Steveeeeeeen
2025-05-21T13:10:11Z
22
0
transformers
[ "transformers", "safetensors", "xcodec2", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-20T15:59:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tes76/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-leggy_shaggy_vulture
tes76
2025-05-21T13:09:14Z
15
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am leggy shaggy vulture", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-14T15:16:57Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-leggy_shaggy_vulture tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am leggy shaggy vulture - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-leggy_shaggy_vulture This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="tes76/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-leggy_shaggy_vulture", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
filipesantoscv11/97a1461b-b87a-4be0-ad02-3bc607c92052
filipesantoscv11
2025-05-21T13:07:49Z
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-1.5B", "base_model:adapter:unsloth/Qwen2-1.5B", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-05-21T12:53:39Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-1.5B tags: - axolotl - generated_from_trainer model-index: - name: 97a1461b-b87a-4be0-ad02-3bc607c92052 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: unsloth/Qwen2-1.5B bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 8238689af7edb3c9_train_data.json ds_type: json format: custom path: /workspace/input_data/ type: field_instruction: system field_output: prompt format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null dpo: beta: 0.1 enabled: true group_by_length: false rank_loss: true reference_model: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: filipesantoscv11/97a1461b-b87a-4be0-ad02-3bc607c92052 hub_repo: null hub_strategy: end hub_token: null learning_rate: 1.0e-06 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 96 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 48 lora_target_linear: true lr_scheduler: cosine max_steps: 240 micro_batch_size: 5 mixed_precision: bf16 mlflow_experiment_name: /tmp/8238689af7edb3c9_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: dd656613-2166-41f4-8840-76ceb5e9b641 wandb_project: s56-2 wandb_run: your_name wandb_runid: dd656613-2166-41f4-8840-76ceb5e9b641 warmup_steps: 40 weight_decay: 0.02 xformers_attention: true ``` </details><br> # 97a1461b-b87a-4be0-ad02-3bc607c92052 This model is a fine-tuned version of [unsloth/Qwen2-1.5B](https://huggingface.co/unsloth/Qwen2-1.5B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.7674 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 5 - eval_batch_size: 5 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 10 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 40 - training_steps: 240 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.2539 | 0.0394 | 240 | 1.7674 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
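The card omits a usage snippet. A minimal sketch for loading this LoRA adapter on top of its base model with PEFT (the repo id and base model come from the card; device placement and generation settings are illustrative assumptions):

```python
# Minimal sketch: load the LoRA adapter on top of unsloth/Qwen2-1.5B via PEFT.
# Only the repo id and base model come from the card; everything else is an assumption.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "filipesantoscv11/97a1461b-b87a-4be0-ad02-3bc607c92052"

# AutoPeftModelForCausalLM reads the adapter config and fetches the base model itself.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2-1.5B")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```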
Ding0702/qingshanchangzai
Ding0702
2025-05-21T13:07:45Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-21T13:07:45Z
--- license: apache-2.0 ---
bruhzair/author-base10
bruhzair
2025-05-21T13:06:56Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-21T12:48:55Z
--- base_model: [] library_name: transformers tags: - mergekit - merge --- # author-base10 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using /workspace/cache/models--TheSkullery--L3.1x3.3-Hydroblated-R1-70B-v5/snapshots/885b8ba1b37ca0ec5135b20c7ec4ed35441536f7 as a base. ### Models Merged The following models were included in the merge: * /workspace/cache/models--MaziyarPanahi--calme-2.3-llama3.1-70b/snapshots/6d0ae253d0d3ec2005182e0484d82c7f46e5ee5c * /workspace/cache/models--tdrussell--Llama-3-70B-Instruct-Storywriter/snapshots/19be2a7c6382a9150e126cf144e2b2964e700d3c * /workspace/cache/models--Steelskull--L3.3-Cu-Mai-R1-70b/snapshots/b91f4c0521b59336a71da961ac133458d81f2f4e * /workspace/cache/models--hitachi-nlp--Llama-3.1-70B-FLDx2/snapshots/051461669991c591aab9e96182b84bdc97733c7f ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: /workspace/cache/models--TheSkullery--L3.1x3.3-Hydroblated-R1-70B-v5/snapshots/885b8ba1b37ca0ec5135b20c7ec4ed35441536f7 - model: /workspace/cache/models--hitachi-nlp--Llama-3.1-70B-FLDx2/snapshots/051461669991c591aab9e96182b84bdc97733c7f - model: /workspace/cache/models--Steelskull--L3.3-Cu-Mai-R1-70b/snapshots/b91f4c0521b59336a71da961ac133458d81f2f4e - model: /workspace/cache/models--tdrussell--Llama-3-70B-Instruct-Storywriter/snapshots/19be2a7c6382a9150e126cf144e2b2964e700d3c - model: /workspace/cache/models--MaziyarPanahi--calme-2.3-llama3.1-70b/snapshots/6d0ae253d0d3ec2005182e0484d82c7f46e5ee5c base_model: /workspace/cache/models--TheSkullery--L3.1x3.3-Hydroblated-R1-70B-v5/snapshots/885b8ba1b37ca0ec5135b20c7ec4ed35441536f7 merge_method: model_stock tokenizer: source: union int8_mask: true dtype: bfloat16 ```
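To reproduce a merge like this, the YAML above is passed to mergekit's CLI (`mergekit-yaml config.yaml ./output-dir`). For simply using the published result, a minimal loading sketch (the dtype matches the `bfloat16` in the config; everything else is an assumption):

```python
# Sketch: load the merged 70B checkpoint for generation.
# device_map/dtype choices are illustrative; a 70B model needs substantial GPU memory.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="bruhzair/author-base10",
    torch_dtype=torch.bfloat16,  # the merge was produced in bfloat16
    device_map="auto",
)
print(generator("Once upon a time,", max_new_tokens=64)[0]["generated_text"])
```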
dzanbek/f75b8d76-f837-4e24-af65-8b06d9712f09
dzanbek
2025-05-21T13:06:42Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "starcoder2", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "conversational", "arxiv:2305.18290", "base_model:bigcode/starcoder2-3b", "base_model:quantized:bigcode/starcoder2-3b", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-21T12:55:12Z
--- base_model: bigcode/starcoder2-3b library_name: transformers model_name: f75b8d76-f837-4e24-af65-8b06d9712f09 tags: - generated_from_trainer - axolotl - dpo - trl licence: license --- # Model Card for f75b8d76-f837-4e24-af65-8b06d9712f09 This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="dzanbek/f75b8d76-f837-4e24-af65-8b06d9712f09", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-2/runs/ccijxvuz) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
New-tutorial-Bindura-University-Viral-Link/Orginal.Full.Clip.Bindura.University.Viral.Video.Leaks.Official
New-tutorial-Bindura-University-Viral-Link
2025-05-21T13:04:31Z
0
0
null
[ "region:us" ]
null
2025-05-21T13:01:29Z
Bindura University Leaked Bedroom Video: Student Responds, Hints at Pregnancy in Open Letter. By Chris Matambanadzo, May 20, 2025, in Local Zimbabwe News, Scandals. A Bindura University of Science Education (BUSE) student, Delight Marwizi, known online as Audeng Dee, has broken her silence after a bedroom video allegedly involving her and her boyfriend surfaced online and quickly went viral.
FINGU-AI/Sesame-1B-Korean
FINGU-AI
2025-05-21T13:04:28Z
0
0
transformers
[ "transformers", "safetensors", "csm", "text-to-audio", "text-generation-inference", "unsloth", "en", "base_model:unsloth/csm-1b", "base_model:finetune:unsloth/csm-1b", "license:cc-by-sa-4.0", "endpoints_compatible", "region:us" ]
text-to-audio
2025-05-21T12:55:12Z
--- base_model: unsloth/csm-1b tags: - text-generation-inference - transformers - unsloth - csm license: cc-by-sa-4.0 language: - en --- # Uploaded finetuned model - **Developed by:** FINGU-AI - **License:** apache-2.0 - **Finetuned from model :** unsloth/csm-1b This csm model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
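The card gives no inference example. Assuming the checkpoint works with transformers' generic `text-to-audio` pipeline (the repo is tagged `text-to-audio`; a transformers version with CSM support is required), a sketch might look like:

```python
# Sketch: synthesize speech through the text-to-audio pipeline.
# Pipeline compatibility with CSM checkpoints is an assumption; prompt and path are illustrative.
import soundfile as sf
from transformers import pipeline

tts = pipeline("text-to-audio", model="FINGU-AI/Sesame-1B-Korean", device_map="auto")
speech = tts("안녕하세요, 만나서 반갑습니다.")  # "Hello, nice to meet you." in Korean
sf.write("output.wav", speech["audio"].squeeze(), speech["sampling_rate"])
```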
marimmo/Lab2_SFT_openorca
marimmo
2025-05-21T13:02:58Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/Llama-3.2-1B-unsloth-bnb-4bit", "base_model:quantized:unsloth/Llama-3.2-1B-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-21T13:01:13Z
--- base_model: unsloth/Llama-3.2-1B-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** marimmo - **License:** apache-2.0 - **Finetuned from model :** unsloth/Llama-3.2-1B-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
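No quick-start is included. Since the repo is tagged `text-generation` and builds on a bitsandbytes 4-bit base, a minimal sketch (requires the `bitsandbytes` package; all settings are assumptions):

```python
# Sketch: basic generation with the uploaded checkpoint.
# Loading a bnb-4bit base requires bitsandbytes and a CUDA GPU.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="marimmo/Lab2_SFT_openorca",
    device_map="auto",
)
print(generator("Explain fine-tuning in one sentence.", max_new_tokens=64)[0]["generated_text"])
```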
sergioalves/632ed90c-e40f-4cd6-b6c7-906afe190258
sergioalves
2025-05-21T13:02:35Z
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-1.5B", "base_model:adapter:unsloth/Qwen2-1.5B", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-21T12:53:36Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-1.5B tags: - axolotl - generated_from_trainer model-index: - name: 632ed90c-e40f-4cd6-b6c7-906afe190258 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: unsloth/Qwen2-1.5B bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 8238689af7edb3c9_train_data.json ds_type: json format: custom path: /workspace/input_data/ type: field_instruction: system field_output: prompt format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null dpo: beta: 0.1 enabled: true group_by_length: false rank_loss: true reference_model: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: sergioalves/632ed90c-e40f-4cd6-b6c7-906afe190258 hub_repo: null hub_strategy: end hub_token: null learning_rate: 2.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.1 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 500 micro_batch_size: 10 mixed_precision: bf16 mlflow_experiment_name: /tmp/8238689af7edb3c9_train_data.json model_type: AutoModelForCausalLM num_epochs: 2 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: dd656613-2166-41f4-8840-76ceb5e9b641 wandb_project: s56-7 wandb_run: your_name wandb_runid: dd656613-2166-41f4-8840-76ceb5e9b641 warmup_steps: 50 weight_decay: 0.02 xformers_attention: true ``` </details><br> # 632ed90c-e40f-4cd6-b6c7-906afe190258 This model is a fine-tuned version of [unsloth/Qwen2-1.5B](https://huggingface.co/unsloth/Qwen2-1.5B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.7692 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-06 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.2026 | 0.0002 | 1 | 2.0199 | | 2.3145 | 0.0410 | 250 | 1.7944 | | 1.9925 | 0.0820 | 500 | 1.7692 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
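As with the sibling adapter above, this card ships only training details. For standalone deployment, the LoRA weights can be folded into the base model; a sketch (the repo id comes from the card; the output path is illustrative):

```python
# Sketch: merge the LoRA adapter into unsloth/Qwen2-1.5B for standalone use.
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained(
    "sergioalves/632ed90c-e40f-4cd6-b6c7-906afe190258"
)
merged = model.merge_and_unload()              # fold adapter weights into the base model
merged.save_pretrained("./qwen2-1.5b-merged")  # illustrative output directory
```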
medmekk/bitnet_kernel
medmekk
2025-05-21T13:02:06Z
0
0
null
[ "region:us" ]
null
2025-05-21T12:56:01Z
--- title: "BitNet Kernel" --- # BitNet Kernel This is the BitNet kernel implementation from https://github.com/microsoft/BitNet/tree/main/gpu/bitnet_kernels
emiliensilly/poststemQ
emiliensilly
2025-05-21T13:02:01Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-21T12:43:10Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
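The auto-generated card leaves "How to Get Started" empty. Given the repo's `text-generation` tags (Qwen3, TRL SFT), a generic loading sketch along standard transformers lines (every setting here is an assumption, not the author's documented usage):

```python
# Sketch: generic loading for this text-generation checkpoint; nothing here is author-confirmed.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "emiliensilly/poststemQ"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Question: What is photosynthesis?\nAnswer:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```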
aipglu/my_awesome_asr_mind_model
aipglu
2025-05-21T13:01:31Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:minds14", "base_model:facebook/wav2vec2-base", "base_model:finetune:facebook/wav2vec2-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-05-21T11:58:07Z
--- library_name: transformers license: apache-2.0 base_model: facebook/wav2vec2-base tags: - generated_from_trainer datasets: - minds14 model-index: - name: my_awesome_asr_mind_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_asr_mind_model This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 2000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
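The card stops at training details. A minimal transcription sketch with the standard ASR pipeline (the audio path is illustrative; when given a file path, the pipeline decodes and resamples the audio to the model's expected rate):

```python
# Sketch: transcribe an audio file with the fine-tuned wav2vec2 checkpoint.
# The file path is illustrative; the pipeline handles decoding and resampling.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="aipglu/my_awesome_asr_mind_model",
)
print(asr("sample.wav")["text"])
```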
paulhjwu/my_awesome_nllb_twen_books_model_3
paulhjwu
2025-05-21T13:01:12Z
0
0
null
[ "safetensors", "m2m_100", "region:us" ]
null
2025-05-21T12:25:07Z
kokovova/145380fa-06b6-48eb-bbe2-f35319c5314d
kokovova
2025-05-21T12:58:33Z
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-1.5B", "base_model:adapter:unsloth/Qwen2-1.5B", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-21T12:53:37Z
science-of-finetuning/Meta-Llama-3.1-8B-L16-k200-lr1e-04-local-shuffling-Crosscoder-ni0.3-ka1k5k
science-of-finetuning
2025-05-21T12:57:09Z
0
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2025-05-21T12:52:56Z
TEC2004/SafeEar-ASV19-spoof-detection
TEC2004
2025-05-21T12:56:01Z
0
1
null
[ "audio", "speech-antispoofing", "spoof-detection", "voice-liveness", "safeear", "asvspoof2019", "pytorch", "audio-classification", "license:apache-2.0", "region:us" ]
audio-classification
2025-05-21T11:07:08Z
18-EXCLUSIVE-TRENDING-CLIP/FULL.VIDEO.Bindura.University.Viral.Video.Leaks.Official
18-EXCLUSIVE-TRENDING-CLIP
2025-05-21T12:54:47Z
0
0
null
[ "region:us" ]
null
2025-05-21T12:43:19Z
GODBlessU2/250d7728-bdba-40f9-ad38-bed2f3bc9255
GODBlessU2
2025-05-21T12:54:42Z
0
0
null
[ "region:us" ]
null
2025-05-21T09:57:58Z
CookFluxNow/0d844a9d-a69d-4f1a-9df8-5e82378e5541
CookFluxNow
2025-05-21T12:53:37Z
0
0
null
[ "region:us" ]
null
2025-05-21T12:53:32Z
tellmenanson/68c1a8d8-1398-4b10-a5e5-20dd0b538628
tellmenanson
2025-05-21T12:52:16Z
0
0
null
[ "region:us" ]
null
2025-05-21T12:51:55Z
genixo/Llama3.2-learn
genixo
2025-05-21T12:51:55Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-21T12:38:03Z
Shannonjunior/f0990db1-a4e1-44dc-abdf-c28aeca2de3d
Shannonjunior
2025-05-21T12:51:54Z
0
0
null
[ "region:us" ]
null
2025-05-21T12:51:44Z
dimasik1987/f57f0044-318e-4078-ad82-d48e257ef89d
dimasik1987
2025-05-21T12:48:23Z
0
0
null
[ "region:us" ]
null
2025-05-21T12:15:10Z
int1306866/4b837de4-d6dc-4bd2-8394-b1acdf7977cc
int1306866
2025-05-21T12:47:59Z
0
0
null
[ "region:us" ]
null
2025-05-21T10:51:42Z