| Column | Type | Range / values |
|:--|:--|:--|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-07-14 12:27:51 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string (categorical) | 520 distinct values |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string (categorical) | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-07-14 12:25:52 |
| card | string | length 11 – 1.01M |
garfinho/mvtec_bottle_finetuned
garfinho
2025-05-22T15:48:41Z
0
0
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers-training", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2025-05-22T14:49:43Z
--- base_model: runwayml/stable-diffusion-v1-5 library_name: diffusers license: creativeml-openrail-m inference: true tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - diffusers-training - lora --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # LoRA text2image fine-tuning - garfinho/mvtec_bottle_finetuned These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5, fine-tuned on the garfinho/MVTecADTextImagePairs dataset. Some example images follow. ![img_0](./image_0.png) ![img_1](./image_1.png) ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
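The empty "How to use" block in the card above could be filled with a minimal sketch along these lines, assuming standard diffusers LoRA loading on the base model named in the front matter; the prompt and output file name are illustrative, not from the card:

```python
# Minimal sketch, assuming standard diffusers LoRA loading on the stated base model.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.load_lora_weights("garfinho/mvtec_bottle_finetuned")
pipe.to("cuda")

# Hypothetical prompt; the card does not document the training captions.
image = pipe("a photo of a bottle").images[0]
image.save("bottle.png")
```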
Triangle104/QwQ-32B-ArliAI-RpR-v4-Q3_K_L-GGUF
Triangle104
2025-05-22T15:29:58Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:ArliAI/QwQ-32B-ArliAI-RpR-v4", "base_model:quantized:ArliAI/QwQ-32B-ArliAI-RpR-v4", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-05-22T15:27:24Z
--- license: apache-2.0 thumbnail: https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/hIZ2ZcaDyfYLT9Yd4pfOs.jpeg language: - en base_model: ArliAI/QwQ-32B-ArliAI-RpR-v4 library_name: transformers pipeline_tag: text-generation tags: - llama-cpp - gguf-my-repo --- # Triangle104/QwQ-32B-ArliAI-RpR-v4-Q3_K_L-GGUF This model was converted to GGUF format from [`ArliAI/QwQ-32B-ArliAI-RpR-v4`](https://huggingface.co/ArliAI/QwQ-32B-ArliAI-RpR-v4) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/ArliAI/QwQ-32B-ArliAI-RpR-v4) for more details on the model. --- RpR (RolePlay with Reasoning) is a new series of models from ArliAI. This series builds directly upon the successful dataset curation methodology and training methods developed for the RPMax series. RpR models use the same curated, deduplicated RP and creative writing dataset used for RPMax, with a focus on variety to ensure high creativity and minimize cross-context repetition. Users familiar with RPMax will recognize a unique, non-repetitive writing style not found in other finetuned-for-RP models. With the release of QwQ as the first high-performing open-source reasoning model that can be easily trained, it was clear that the available instruct and creative-writing reasoning datasets contain only one response per example. Training reasoning models on this type of single-response dataset degrades output quality in long multi-turn chats, which is why Arli AI decided to create a real RP model capable of long multi-turn chat with reasoning. To create RpR, we first had to build the reasoning RP dataset by re-processing our existing known-good RPMax dataset into a reasoning dataset. This was possible by using the base QwQ Instruct model itself to create the reasoning process for every turn in the RPMax conversation examples, which were then further refined to make sure the reasoning is in line with the actual response examples from the dataset. Another important thing to get right is making sure the model is trained on examples that present reasoning blocks the same way it encounters them during inference: that is, never seeing the reasoning blocks in its context. To ensure this, the training run used axolotl with a manual, template-free segments dataset, so the model is never trained to see the reasoning block in its context, just as it will be used at inference time. The result of training QwQ on this dataset with this method is consistently coherent and interesting output, even in long multi-turn RP chats. As far as we know, this is the first correctly-trained reasoning model for RP and creative writing. --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v4-Q3_K_L-GGUF --hf-file qwq-32b-arliai-rpr-v4-q3_k_l.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v4-Q3_K_L-GGUF --hf-file qwq-32b-arliai-rpr-v4-q3_k_l.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v4-Q3_K_L-GGUF --hf-file qwq-32b-arliai-rpr-v4-q3_k_l.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v4-Q3_K_L-GGUF --hf-file qwq-32b-arliai-rpr-v4-q3_k_l.gguf -c 2048 ```
ypark-bioinfo/segformer-b5-finetuned-ce-head-image_ver1.3
ypark-bioinfo
2025-05-22T15:24:40Z
0
0
transformers
[ "transformers", "safetensors", "segformer", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-22T15:02:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MinaMila/gemma2_2b_LoRa_ACSEmployment_2_ep9_22
MinaMila
2025-05-22T15:22:55Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-22T15:22:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
winssu/ppo-SnowballTarget
winssu
2025-05-22T15:13:57Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2025-05-22T15:13:48Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent using ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: winssu/ppo-SnowballTarget 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
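For local playback rather than the browser viewer, a hedged sketch of fetching this checkpoint with the ML-Agents Hub helper; it assumes the `mlagents-load-from-hf` entry point that ships alongside `mlagents-learn`, and the local directory is an arbitrary choice:

```bash
# Assumption: mlagents-load-from-hf is available in the installed ml-agents release.
mlagents-load-from-hf --repo-id="winssu/ppo-SnowballTarget" --local-dir="./downloads/SnowballTarget"
```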
JoshMe1/3bc252ce-df86-4a94-a54f-364264a38d4b
JoshMe1
2025-05-22T15:08:29Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:openlm-research/open_llama_3b", "base_model:adapter:openlm-research/open_llama_3b", "license:apache-2.0", "region:us" ]
null
2025-05-22T11:16:12Z
--- library_name: peft license: apache-2.0 base_model: openlm-research/open_llama_3b tags: - axolotl - generated_from_trainer model-index: - name: 3bc252ce-df86-4a94-a54f-364264a38d4b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: openlm-research/open_llama_3b bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - /workspace/input_data/7e54839e91d26178_train_data.json ds_type: json format: custom path: /workspace/input_data/7e54839e91d26178_train_data.json type: field_input: Patient field_instruction: Description field_output: Doctor format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto early_stopping_patience: 3 eval_max_new_tokens: 128 eval_steps: 100 eval_table_size: null evals_per_epoch: null flash_attention: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clip_norm: 1.0 group_by_length: false hub_model_id: JoshMe1/3bc252ce-df86-4a94-a54f-364264a38d4b hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 5.0e-06 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 128 lora_dropout: 0.1 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_memory: 0: 130GB max_steps: 200 micro_batch_size: 2 mixed_precision: bf16 mlflow_experiment_name: /tmp/7e54839e91d26178_train_data.json model_type: AutoModelForCausalLM num_epochs: 10 offload_folder: /workspace/offload/3ba49a9c-4b04-41e0-8eff-c8927e11081b optimizer: adamw_hf output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 100 saves_per_epoch: null sequence_len: 2048 special_tokens: pad_token: </s> strict: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 3ba49a9c-4b04-41e0-8eff-c8927e11081b wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 3ba49a9c-4b04-41e0-8eff-c8927e11081b warmup_steps: 200 weight_decay: 0.01 xformers_attention: null ``` </details><br> # 3bc252ce-df86-4a94-a54f-364264a38d4b This model is a fine-tuned version of [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.5639 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: AdamW (ADAMW_HF) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 200 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0000 | 1 | 2.8453 | | 2.7335 | 0.0033 | 100 | 2.7602 | | 2.5571 | 0.0066 | 200 | 2.5639 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
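The card leaves its usage sections empty; a minimal sketch of loading this adapter, assuming the standard PEFT path for a LoRA checkpoint on the base model named in the axolotl config above (not taken from the card itself):

```python
# Minimal sketch, assuming standard PEFT LoRA loading for this adapter repo.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_3b")
tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_3b")
# Attach the LoRA adapter weights published in this repository.
model = PeftModel.from_pretrained(base, "JoshMe1/3bc252ce-df86-4a94-a54f-364264a38d4b")
```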
Iheheb/Test01
Iheheb
2025-05-22T15:08:15Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-22T15:08:15Z
--- license: apache-2.0 ---
the-real-gabagool/d1_qwen_7B_ep2_shuffled_8192
the-real-gabagool
2025-05-22T15:03:11Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-22T14:50:49Z
--- base_model: Qwen/Qwen2.5-7B-Instruct library_name: transformers model_name: d1_qwen_7B_ep2_shuffled_8192 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for d1_qwen_7B_ep2_shuffled_8192 This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="the-real-gabagool/d1_qwen_7B_ep2_shuffled_8192", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.12.0 - Transformers: 4.46.1 - Pytorch: 2.5.1 - Datasets: 3.1.0 - Tokenizers: 0.20.3 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
yash2913/mistral-gender-predictor
yash2913
2025-05-22T15:02:35Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-22T15:02:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
pawlo2013/cloud-classification-vit
pawlo2013
2025-05-22T14:41:29Z
0
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-05-22T14:40:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
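The card above is an empty template; only the repo tags (`vit`, `image-classification`) indicate the task. A hedged sketch under that assumption, with a hypothetical input file:

```python
# Hedged sketch: the task is inferred from the repo tags, not documented on the card.
from transformers import pipeline

classifier = pipeline("image-classification", model="pawlo2013/cloud-classification-vit")
# "sky.jpg" is a hypothetical local image; any image path or URL works here.
print(classifier("sky.jpg"))
```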
buiqvus/bia
buiqvus
2025-05-22T14:31:10Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-22T14:31:10Z
--- license: apache-2.0 ---
TheGardener/embedding_prune_qwen_0.47B
TheGardener
2025-05-22T14:28:03Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-22T14:24:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vermoney/fc0156b1-656d-4297-a878-c8b83684bea7
vermoney
2025-05-22T14:22:50Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/Yarn-Solar-10b-32k", "base_model:adapter:NousResearch/Yarn-Solar-10b-32k", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-22T13:57:31Z
--- library_name: peft license: apache-2.0 base_model: NousResearch/Yarn-Solar-10b-32k tags: - axolotl - generated_from_trainer model-index: - name: fc0156b1-656d-4297-a878-c8b83684bea7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/Yarn-Solar-10b-32k bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 8b7bf849706ddb22_train_data.json ds_type: json format: custom path: /workspace/input_data/ type: field_input: context field_instruction: question field_output: long_answer format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null dpo: beta: 0.1 enabled: true group_by_length: false rank_loss: true reference_model: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: vermoney/fc0156b1-656d-4297-a878-c8b83684bea7 hub_repo: null hub_strategy: end hub_token: null learning_rate: 2.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 96 lora_dropout: 0.1 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 48 lora_target_linear: true lr_scheduler: cosine max_steps: 280 micro_batch_size: 6 mixed_precision: bf16 mlflow_experiment_name: /tmp/8b7bf849706ddb22_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 4e5765d8-a61e-4a69-91e5-abb95b3c7b6d wandb_project: s56-9 wandb_run: your_name wandb_runid: 4e5765d8-a61e-4a69-91e5-abb95b3c7b6d warmup_steps: 40 weight_decay: 0.02 xformers_attention: true ``` </details><br> # fc0156b1-656d-4297-a878-c8b83684bea7 This model is a fine-tuned version of [NousResearch/Yarn-Solar-10b-32k](https://huggingface.co/NousResearch/Yarn-Solar-10b-32k) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.3092 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-06 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 12 - optimizer: AdamW (8-bit, ADAMW_BNB) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 40 - training_steps: 280 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.488 | 0.0168 | 280 | 1.3092 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
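Since the axolotl config above trains with `load_in_4bit: true` and `trust_remote_code: true`, a loading sketch might mirror those settings; the quantization details below are assumptions, not from the card:

```python
# Hedged sketch: 4-bit loading settings are assumptions mirroring the training config.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Yarn-Solar-10b-32k",
    quantization_config=bnb,
    trust_remote_code=True,  # the axolotl config sets trust_remote_code: true
)
model = PeftModel.from_pretrained(base, "vermoney/fc0156b1-656d-4297-a878-c8b83684bea7")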
mohammad-shirkhani/Qwen2-0.5B-GRPO-from-scratch
mohammad-shirkhani
2025-05-22T13:59:47Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-05T16:22:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kallilikhitha123/finetuned_llama_8b_matching_hpo_best
kallilikhitha123
2025-05-22T13:55:52Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-13T06:42:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hfendpoints-images/embeddings-sentence-transformers-cpu
hfendpoints-images
2025-05-22T13:45:51Z
0
0
null
[ "hfendpoints", "embedding", "base_model:Alibaba-NLP/gte-modernbert-base", "base_model:finetune:Alibaba-NLP/gte-modernbert-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-16T21:08:03Z
--- license: apache-2.0 base_model: - Alibaba-NLP/gte-modernbert-base tags: - hfendpoints - embedding ---
mradermacher/RAIF-Qwen2.5-1.5B-GGUF
mradermacher
2025-05-22T13:43:05Z
235
1
transformers
[ "transformers", "gguf", "en", "dataset:yolay/RAIF-ComplexInstruction-Qwen", "base_model:yolay/RAIF-Qwen2.5-1.5B", "base_model:quantized:yolay/RAIF-Qwen2.5-1.5B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-21T07:56:05Z
--- base_model: yolay/RAIF-Qwen2.5-1.5B datasets: - yolay/RAIF-ComplexInstruction-Qwen language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/yolay/RAIF-Qwen2.5-1.5B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/RAIF-Qwen2.5-1.5B-GGUF/resolve/main/RAIF-Qwen2.5-1.5B.Q2_K.gguf) | Q2_K | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/RAIF-Qwen2.5-1.5B-GGUF/resolve/main/RAIF-Qwen2.5-1.5B.Q3_K_S.gguf) | Q3_K_S | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/RAIF-Qwen2.5-1.5B-GGUF/resolve/main/RAIF-Qwen2.5-1.5B.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/RAIF-Qwen2.5-1.5B-GGUF/resolve/main/RAIF-Qwen2.5-1.5B.Q3_K_L.gguf) | Q3_K_L | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/RAIF-Qwen2.5-1.5B-GGUF/resolve/main/RAIF-Qwen2.5-1.5B.IQ4_XS.gguf) | IQ4_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/RAIF-Qwen2.5-1.5B-GGUF/resolve/main/RAIF-Qwen2.5-1.5B.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/RAIF-Qwen2.5-1.5B-GGUF/resolve/main/RAIF-Qwen2.5-1.5B.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/RAIF-Qwen2.5-1.5B-GGUF/resolve/main/RAIF-Qwen2.5-1.5B.Q5_K_S.gguf) | Q5_K_S | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/RAIF-Qwen2.5-1.5B-GGUF/resolve/main/RAIF-Qwen2.5-1.5B.Q5_K_M.gguf) | Q5_K_M | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/RAIF-Qwen2.5-1.5B-GGUF/resolve/main/RAIF-Qwen2.5-1.5B.Q6_K.gguf) | Q6_K | 1.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/RAIF-Qwen2.5-1.5B-GGUF/resolve/main/RAIF-Qwen2.5-1.5B.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/RAIF-Qwen2.5-1.5B-GGUF/resolve/main/RAIF-Qwen2.5-1.5B.f16.gguf) | f16 | 3.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
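As a concrete starting point, the Q4_K_M file from the quant table above can be run with the same llama.cpp invocation pattern used elsewhere on the Hub; the prompt is illustrative:

```bash
# Sketch: run the "fast, recommended" Q4_K_M quant via llama.cpp's CLI.
llama-cli --hf-repo mradermacher/RAIF-Qwen2.5-1.5B-GGUF \
  --hf-file RAIF-Qwen2.5-1.5B.Q4_K_M.gguf \
  -p "Rewrite the following instruction as a checklist:"
```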
hyu1/model1
hyu1
2025-05-22T13:40:45Z
24
0
transformers
[ "transformers", "pytorch", "safetensors", "gguf", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-21T16:57:43Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** hyu1 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
emaanbilal/legalQA-prompt-tuning-meta-llama-Llama-3.2-1B-Instruct
emaanbilal
2025-05-22T13:30:43Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-22T06:18:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
videos-jobz-hunting-pakistan-viral/Jobz.Hunting.Pakistan.viral.Video-Original-link
videos-jobz-hunting-pakistan-viral
2025-05-22T13:20:15Z
0
0
null
[ "region:us" ]
null
2025-05-22T13:19:38Z
<a rel="nofollow" href="https://iccnews.xyz/leaked?dd">🌐 Jobz Hunting Pakistan Viral Video Original Full HD🟢==►► WATCH NOW</a> <a rel="nofollow" href="https://iccnews.xyz/leaked?dd">🔴 CLICK HERE 🌐==►► Download Now)</a> <a rel="nofollow" href="https://iccnews.xyz/leaked?dd"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
the-acorn-ai/simon-qwen3-4b-base-kp-4k-self-play-with-role-step_00064
the-acorn-ai
2025-05-22T12:06:21Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-22T12:04:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fpadovani/de_wiki_clm_30
fpadovani
2025-05-22T12:02:48Z
5
1
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-08T09:05:45Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: de_wiki_clm_30 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # de_wiki_clm_30 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.0348 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 30 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 40000 - training_steps: 100000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-------:|:------:|:---------------:| | No log | 1.0796 | 2000 | 7.8191 | | 7.928 | 2.1592 | 4000 | 7.0870 | | 7.928 | 3.2389 | 6000 | 6.6422 | | 6.6946 | 4.3185 | 8000 | 6.2840 | | 6.6946 | 5.3981 | 10000 | 5.9706 | | 6.037 | 6.4777 | 12000 | 5.6935 | | 6.037 | 7.5574 | 14000 | 5.4614 | | 5.5288 | 8.6370 | 16000 | 5.2527 | | 5.5288 | 9.7166 | 18000 | 5.0790 | | 5.1465 | 10.7962 | 20000 | 4.9348 | | 5.1465 | 11.8758 | 22000 | 4.8114 | | 4.8667 | 12.9555 | 24000 | 4.7085 | | 4.8667 | 14.0351 | 26000 | 4.6242 | | 4.6478 | 15.1147 | 28000 | 4.5389 | | 4.6478 | 16.1943 | 30000 | 4.4701 | | 4.4727 | 17.2740 | 32000 | 4.4099 | | 4.4727 | 18.3536 | 34000 | 4.3633 | | 4.3307 | 19.4332 | 36000 | 4.3184 | | 4.3307 | 20.5128 | 38000 | 4.2779 | | 4.2116 | 21.5924 | 40000 | 4.2453 | | 4.2116 | 22.6721 | 42000 | 4.2135 | | 4.1017 | 23.7517 | 44000 | 4.1839 | | 4.1017 | 24.8313 | 46000 | 4.1570 | | 4.0019 | 25.9109 | 48000 | 4.1387 | | 4.0019 | 26.9906 | 50000 | 4.1239 | | 3.9164 | 28.0702 | 52000 | 4.1119 | | 3.9164 | 29.1498 | 54000 | 4.1000 | | 3.8451 | 30.2294 | 56000 | 4.0912 | | 3.8451 | 31.3090 | 58000 | 4.0843 | | 3.7863 | 32.3887 | 60000 | 4.0820 | | 3.7863 | 33.4683 | 62000 | 4.0735 | | 3.7356 | 34.5479 | 64000 | 4.0649 | | 3.7356 | 35.6275 | 66000 | 4.0574 | | 3.6893 | 36.7072 | 68000 | 4.0564 | | 3.6893 | 37.7868 | 70000 | 4.0526 | | 3.6492 | 38.8664 | 72000 | 4.0485 | | 3.6492 | 39.9460 | 74000 | 4.0457 | | 3.6111 | 41.0256 | 76000 | 4.0483 | | 3.6111 | 42.1053 | 78000 | 4.0443 | | 3.5749 | 43.1849 | 80000 | 4.0452 | | 3.5749 | 44.2645 | 82000 | 4.0453 | | 3.5442 | 45.3441 | 84000 | 4.0435 | | 3.5442 | 46.4238 | 86000 | 4.0421 | | 3.5184 | 47.5034 | 88000 | 4.0403 | | 3.5184 | 48.5830 | 90000 | 4.0411 | | 3.4926 | 49.6626 | 92000 | 4.0383 | | 3.4926 | 50.7422 | 94000 | 4.0385 | | 3.4715 | 51.8219 | 96000 | 4.0355 | | 3.4715 | 52.9015 | 98000 | 4.0359 | | 3.4519 | 53.9811 | 100000 | 4.0348 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.5.1+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
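For readers who want to reproduce this setup, the hyperparameters listed above map one-to-one onto 🤗 `TrainingArguments`; the sketch below is illustrative (the `output_dir` is an assumption), not the exact training script:

```python
# Illustrative mapping of the listed hyperparameters onto TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="de_wiki_clm_30",        # assumption: any local path works
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=30,
    gradient_accumulation_steps=2,      # total train batch size: 32
    lr_scheduler_type="linear",
    warmup_steps=40_000,
    max_steps=100_000,
    fp16=True,                          # "Native AMP" mixed precision
)
```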
BITSong/song-LLM_aug
BITSong
2025-05-22T11:22:25Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-22T06:04:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fcakyon/florence-2-base
fcakyon
2025-05-22T10:52:02Z
0
0
null
[ "pytorch", "florence2", "vision", "image-text-to-text", "custom_code", "arxiv:2311.06242", "license:mit", "region:us" ]
image-text-to-text
2025-05-22T09:58:06Z
--- license: mit license_link: https://huggingface.co/microsoft/Florence-2-base/resolve/main/LICENSE pipeline_tag: image-text-to-text tags: - vision --- # Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks This repo includes 3 important fixes that the original repo misses: - https://huggingface.co/microsoft/Florence-2-base/discussions/26 - https://huggingface.co/microsoft/Florence-2-base/discussions/17 - https://huggingface.co/microsoft/Florence-2-large/discussions/93 ## Model Summary This Hub repository contains Hugging Face's `transformers` implementation of the Florence-2 model from Microsoft. Florence-2 is an advanced vision foundation model that uses a prompt-based approach to handle a wide range of vision and vision-language tasks. Florence-2 can interpret simple text prompts to perform tasks like captioning, object detection, and segmentation. It leverages our FLD-5B dataset, containing 5.4 billion annotations across 126 million images, to master multi-task learning. The model's sequence-to-sequence architecture enables it to excel in both zero-shot and fine-tuned settings, proving to be a competitive vision foundation model. Resources and Technical Documentation: + [Florence-2 technical report](https://arxiv.org/abs/2311.06242). + [Jupyter Notebook for inference and visualization of Florence-2-large model](https://huggingface.co/microsoft/Florence-2-large/blob/main/sample_inference.ipynb) | Model | Model size | Model Description | | ------- | ------------- | ------------- | | Florence-2-base[[HF]](https://huggingface.co/microsoft/Florence-2-base) | 0.23B | Pretrained model with FLD-5B | Florence-2-large[[HF]](https://huggingface.co/microsoft/Florence-2-large) | 0.77B | Pretrained model with FLD-5B | Florence-2-base-ft[[HF]](https://huggingface.co/microsoft/Florence-2-base-ft) | 0.23B | Finetuned model on a collection of downstream tasks | Florence-2-large-ft[[HF]](https://huggingface.co/microsoft/Florence-2-large-ft) | 0.77B | Finetuned model on a collection of downstream tasks ## How to Get Started with the Model Use the code below to get started with the model. All models are trained with float16. ```python import torch import requests from PIL import Image from transformers import AutoProcessor, AutoModelForCausalLM device = "cuda:0" if torch.cuda.is_available() else "cpu" torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32 model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-base", torch_dtype=torch_dtype, trust_remote_code=True).to(device) processor = AutoProcessor.from_pretrained("microsoft/Florence-2-base", trust_remote_code=True) prompt = "<OD>" url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true" image = Image.open(requests.get(url, stream=True).raw) inputs = processor(text=prompt, images=image, return_tensors="pt").to(device, torch_dtype) generated_ids = model.generate( input_ids=inputs["input_ids"], pixel_values=inputs["pixel_values"], max_new_tokens=1024, do_sample=False, num_beams=3, ) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0] parsed_answer = processor.post_process_generation(generated_text, task="<OD>", image_size=(image.width, image.height)) print(parsed_answer) ``` ## Tasks This model is capable of performing different tasks by changing the prompt. First, let's define a function to run a prompt. 
<details> <summary> Click to expand </summary> ```python import torch import requests from PIL import Image from transformers import AutoProcessor, AutoModelForCausalLM device = "cuda:0" if torch.cuda.is_available() else "cpu" torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32 model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-base", torch_dtype=torch_dtype, trust_remote_code=True).to(device) processor = AutoProcessor.from_pretrained("microsoft/Florence-2-base", trust_remote_code=True) url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true" image = Image.open(requests.get(url, stream=True).raw) def run_example(task_prompt, text_input=None): if text_input is None: prompt = task_prompt else: prompt = task_prompt + text_input inputs = processor(text=prompt, images=image, return_tensors="pt").to(device, torch_dtype) generated_ids = model.generate( input_ids=inputs["input_ids"], pixel_values=inputs["pixel_values"], max_new_tokens=1024, num_beams=3 ) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0] parsed_answer = processor.post_process_generation(generated_text, task=task_prompt, image_size=(image.width, image.height)) print(parsed_answer) ``` </details> Here are the tasks `Florence-2` can perform: <details> <summary> Click to expand </summary> ### Caption ```python prompt = "<CAPTION>" run_example(prompt) ``` ### Detailed Caption ```python prompt = "<DETAILED_CAPTION>" run_example(prompt) ``` ### More Detailed Caption ```python prompt = "<MORE_DETAILED_CAPTION>" run_example(prompt) ``` ### Caption to Phrase Grounding The caption to phrase grounding task requires additional text input, i.e. a caption. Caption to phrase grounding results format: {'\<CAPTION_TO_PHRASE_GROUNDING>': {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['', '', ...]}} ```python task_prompt = "<CAPTION_TO_PHRASE_GROUNDING>" results = run_example(task_prompt, text_input="A green car parked in front of a yellow building.") ``` ### Object Detection OD results format: {'\<OD>': {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['label1', 'label2', ...]} } ```python prompt = "<OD>" run_example(prompt) ``` ### Dense Region Caption Dense region caption results format: {'\<DENSE_REGION_CAPTION>' : {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['label1', 'label2', ...]} } ```python prompt = "<DENSE_REGION_CAPTION>" run_example(prompt) ``` ### Region proposal Region proposal results format: {'\<REGION_PROPOSAL>': {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['', '', ...]}} ```python prompt = "<REGION_PROPOSAL>" run_example(prompt) ``` ### OCR ```python prompt = "<OCR>" run_example(prompt) ``` ### OCR with Region OCR with region output format: {'\<OCR_WITH_REGION>': {'quad_boxes': [[x1, y1, x2, y2, x3, y3, x4, y4], ...], 'labels': ['text1', ...]}} ```python prompt = "<OCR_WITH_REGION>" run_example(prompt) ``` For more detailed examples, please refer to the [notebook](https://huggingface.co/microsoft/Florence-2-large/blob/main/sample_inference.ipynb) </details> 
# Benchmarks ## Florence-2 Zero-shot performance The following table presents the zero-shot performance of generalist vision foundation models on image captioning and object detection evaluation tasks. These models have not been exposed to the training data of the evaluation tasks during their training phase. | Method | #params | COCO Cap. test CIDEr | NoCaps val CIDEr | TextCaps val CIDEr | COCO Det. val2017 mAP | |--------|---------|----------------------|------------------|--------------------|-----------------------| | Flamingo | 80B | 84.3 | - | - | - | | Florence-2-base| 0.23B | 133.0 | 118.7 | 70.1 | 34.7 | | Florence-2-large| 0.77B | 135.6 | 120.8 | 72.8 | 37.5 | The following table continues the comparison with performance on other vision-language evaluation tasks. | Method | Flickr30k test R@1 | Refcoco val Accuracy | Refcoco test-A Accuracy | Refcoco test-B Accuracy | Refcoco+ val Accuracy | Refcoco+ test-A Accuracy | Refcoco+ test-B Accuracy | Refcocog val Accuracy | Refcocog test Accuracy | Refcoco RES val mIoU | |--------|----------------------|----------------------|-------------------------|-------------------------|-----------------------|--------------------------|--------------------------|-----------------------|------------------------|----------------------| | Kosmos-2 | 78.7 | 52.3 | 57.4 | 47.3 | 45.5 | 50.7 | 42.2 | 60.6 | 61.7 | - | | Florence-2-base | 83.6 | 53.9 | 58.4 | 49.7 | 51.5 | 56.4 | 47.9 | 66.3 | 65.1 | 34.6 | | Florence-2-large | 84.4 | 56.3 | 61.6 | 51.4 | 53.6 | 57.9 | 49.9 | 68.0 | 67.0 | 35.8 | 
## Florence-2 finetuned performance We finetune Florence-2 models with a collection of downstream tasks, resulting in two generalist models *Florence-2-base-ft* and *Florence-2-large-ft* that can conduct a wide range of downstream tasks. The table below compares the performance of specialist and generalist models on various captioning and Visual Question Answering (VQA) tasks. Specialist models are fine-tuned specifically for each task, whereas generalist models are fine-tuned in a task-agnostic manner across all tasks. The symbol "▲" indicates the usage of external OCR as input. | Method | # Params | COCO Caption Karpathy test CIDEr | NoCaps val CIDEr | TextCaps val CIDEr | VQAv2 test-dev Acc | TextVQA test-dev Acc | VizWiz VQA test-dev Acc | |----------------|----------|-----------------------------------|------------------|--------------------|--------------------|----------------------|-------------------------| | **Specialist Models** | | | | | | | | | CoCa | 2.1B | 143.6 | 122.4 | - | 82.3 | - | - | | BLIP-2 | 7.8B | 144.5 | 121.6 | - | 82.2 | - | - | | GIT2 | 5.1B | 145.0 | 126.9 | 148.6 | 81.7 | 67.3 | 71.0 | | Flamingo | 80B | 138.1 | - | - | 82.0 | 54.1 | 65.7 | | PaLI | 17B | 149.1 | 127.0 | 160.0▲ | 84.3 | 58.8 / 73.1▲ | 71.6 / 74.4▲ | | PaLI-X | 55B | 149.2 | 126.3 | 147.0 / 163.7▲ | 86.0 | 71.4 / 80.8▲ | 70.9 / 74.6▲ | | **Generalist Models** | | | | | | | | | Unified-IO | 2.9B | - | 100.0 | - | 77.9 | - | 57.4 | | Florence-2-base-ft | 0.23B | 140.0 | 116.7 | 143.9 | 79.7 | 63.6 | 63.6 | | Florence-2-large-ft | 0.77B | 143.3 | 124.9 | 151.1 | 81.7 | 73.5 | 72.6 | | Method | # Params | COCO Det. val2017 mAP | Flickr30k test R@1 | RefCOCO val Accuracy | RefCOCO test-A Accuracy | RefCOCO test-B Accuracy | RefCOCO+ val Accuracy | RefCOCO+ test-A Accuracy | RefCOCO+ test-B Accuracy | RefCOCOg val Accuracy | RefCOCOg test Accuracy | RefCOCO RES val mIoU | |----------------------|----------|-----------------------|--------------------|----------------------|-------------------------|-------------------------|------------------------|---------------------------|---------------------------|------------------------|-----------------------|------------------------| | **Specialist Models** | | | | | | | | | | | | | | SeqTR | - | - | - | 83.7 | 86.5 | 81.2 | 71.5 | 76.3 | 64.9 | 74.9 | 74.2 | - | | PolyFormer | - | - | - | 90.4 | 92.9 | 87.2 | 85.0 | 89.8 | 78.0 | 85.8 | 85.9 | 76.9 | | UNINEXT | 0.74B | 60.6 | - | 92.6 | 94.3 | 91.5 | 85.2 | 89.6 | 79.8 | 88.7 | 89.4 | - | | Ferret | 13B | - | - | 89.5 | 92.4 | 84.4 | 82.8 | 88.1 | 75.2 | 85.8 | 86.3 | - | | **Generalist Models** | | | | | | | | | | | | | | UniTAB | - | - | - | 88.6 | 91.1 | 83.8 | 81.0 | 85.4 | 71.6 | 84.6 | 84.7 | - | | Florence-2-base-ft | 0.23B | 41.4 | 84.0 | 92.6 | 94.8 | 91.5 | 86.8 | 91.7 | 82.2 | 89.8 | 82.2 | 78.0 | | Florence-2-large-ft| 0.77B | 43.4 | 85.2 | 93.4 | 95.3 | 92.0 | 88.3 | 92.9 | 83.6 | 91.2 | 91.7 | 80.5 | ## BibTex and citation info ``` @article{xiao2023florence, title={Florence-2: Advancing a unified representation for a variety of vision tasks}, author={Xiao, Bin and Wu, Haiping and Xu, Weijian and Dai, Xiyang and Hu, Houdong and Lu, Yumao and Zeng, Michael and Liu, Ce and Yuan, Lu}, journal={arXiv preprint arXiv:2311.06242}, year={2023} } ```
katrina-lim-kiffy-telegram-link/watch.katrina.lim.kiffy.telegram.link
katrina-lim-kiffy-telegram-link
2025-05-22T06:15:12Z
0
0
null
[ "region:us" ]
null
2025-05-22T06:08:12Z
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️ ](https://the-goat-sanda.blogspot.com/p/goat-sanda-02.html) [►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️ ](https://the-goat-sanda.blogspot.com/p/goat-sanda-02.html) **[WATCH NOW](https://the-goat-sanda.blogspot.com/p/goat-sanda-02.html)** <a href="https://the-goat-sanda.blogspot.com/p/goat-sanda-02.html"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
DanielNRU/pollen-ner2-1750
DanielNRU
2025-05-22T06:13:34Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:DeepPavlov/bert-base-bg-cs-pl-ru-cased", "base_model:adapter:DeepPavlov/bert-base-bg-cs-pl-ru-cased", "region:us" ]
null
2025-05-22T06:07:47Z
--- library_name: peft base_model: DeepPavlov/bert-base-bg-cs-pl-ru-cased tags: - generated_from_trainer metrics: - precision - recall - f1 model-index: - name: pollen-ner2-1750 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pollen-ner2-1750 This model is a fine-tuned version of [DeepPavlov/bert-base-bg-cs-pl-ru-cased](https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.1661 - Precision: 0.8287 - Recall: 0.8936 - F1: 0.8599 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | No log | 1.0 | 219 | 0.1661 | 0.8287 | 0.8936 | 0.8599 | | No log | 2.0 | 438 | 0.1633 | 0.8318 | 0.8835 | 0.8569 | | 0.2581 | 3.0 | 657 | 0.1629 | 0.8212 | 0.8855 | 0.8522 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.7.0+cu128 - Datasets 3.5.0 - Tokenizers 0.21.1
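Since this repository holds a PEFT adapter for a token-classification (NER) model, a hedged loading sketch looks like the following; the card does not state the label set, so `num_labels` below is an assumption:

```python
# Hedged sketch: attach the adapter to its base model with PEFT.
from transformers import AutoModelForTokenClassification, AutoTokenizer
from peft import PeftModel

base = AutoModelForTokenClassification.from_pretrained(
    "DeepPavlov/bert-base-bg-cs-pl-ru-cased",
    num_labels=3,  # assumption: the card does not list the label set
)
model = PeftModel.from_pretrained(base, "DanielNRU/pollen-ner2-1750")
tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/bert-base-bg-cs-pl-ru-cased")
```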
ajmalmahmood/LunarLander-v2
ajmalmahmood
2025-05-22T06:01:08Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2025-05-22T05:34:02Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -159.69 +/- 130.26 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'ajmalmahmood/LunarLander-v2' 'batch_size': 512 'minibatch_size': 128} ```
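To fetch the checkpoint for local evaluation, a hedged sketch follows; the filename inside the repo is an assumption, and the agent class itself must come from the matching cleanRL-style course code:

```python
# Hedged sketch: download the trained weights from the Hub.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="ajmalmahmood/LunarLander-v2",
    filename="model.pt",  # assumption: the actual filename may differ in the repo
)
print(ckpt_path)
```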
lenerkasseos/zxcvx
lenerkasseos
2025-05-22T05:54:21Z
0
0
null
[ "license:bigcode-openrail-m", "region:us" ]
null
2025-05-22T05:54:21Z
--- license: bigcode-openrail-m ---
John6666/icebergmix-v10-sdxl
John6666
2025-05-22T05:47:43Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "girls", "characters", "boleromix", "wai", "hassaku", "rouwei", "Illustrious XL v1.0", "illustrious", "en", "base_model:OnomaAIResearch/Illustrious-XL-v1.0", "base_model:finetune:OnomaAIResearch/Illustrious-XL-v1.0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-05-22T05:42:29Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - girls - characters - boleromix - wai - hassaku - rouwei - Illustrious XL v1.0 - illustrious base_model: OnomaAIResearch/Illustrious-XL-v1.0 --- Original model is [here](https://civitai.com/models/1605847/icebergmix?modelVersionId=1817245). This model was created by [IcebergMM](https://civitai.com/user/IcebergMM).
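Per the `StableDiffusionXLPipeline` tag on this repo, a hedged loading sketch (the prompt and step count are illustrative, not from the original card):

```python
# Hedged sketch: load this SDXL checkpoint with diffusers.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/icebergmix-v10-sdxl", torch_dtype=torch.float16
).to("cuda")
image = pipe("1girl, anime style, detailed background", num_inference_steps=28).images[0]
image.save("sample.png")
```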
ajmalmahmood/ppo-CartPole-v1
ajmalmahmood
2025-05-22T05:25:39Z
0
0
null
[ "tensorboard", "CartPole-v1", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2025-05-22T05:25:30Z
--- tags: - CartPole-v1 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 186.50 +/- 81.13 name: mean_reward verified: false --- # PPO Agent Playing CartPole-v1 This is a trained model of a PPO agent playing CartPole-v1. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'CartPole-v1' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'ajmalmahmood/ppo-CartPole-v1' 'batch_size': 512 'minibatch_size': 128} ```
the-acorn-ai/Qwen3-4B-Base-4K-KuhnPoker-Self-Vanilla-0522-Zichen-step_00224
the-acorn-ai
2025-05-22T05:19:37Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-22T05:16:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
chancharikm/qwen2.5-vl-72b-cam-motion-preview
chancharikm
2025-05-22T04:02:33Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "llama-factory", "full", "generated_from_trainer", "video-text-to-text", "arxiv:2404.01291", "arxiv:2504.15376", "base_model:Qwen/Qwen2.5-VL-72B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-72B-Instruct", "license:other", "text-generation-inference", "endpoints_compatible", "region:us" ]
video-text-to-text
2025-05-22T00:46:27Z
--- base_model: Qwen/Qwen2.5-VL-72B-Instruct library_name: transformers license: other tags: - llama-factory - full - generated_from_trainer pipeline_tag: video-text-to-text model-index: - name: bal_imb_cap_full_lr2e-4_epoch10.0_freezevisTrue_fps8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> ## Model description This model is a fine-tuned version of [Qwen/Qwen2.5-VL-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct) on the highest-quality camera motion dataset currently publicly available. This preview model is the current SOTA for classifying camera motion and for video-text retrieval with camera motion captions using [VQAScore](https://arxiv.org/pdf/2404.01291). Find more information about our work on our GitHub page for [CameraBench](https://github.com/sy77777en/CameraBench). *More updates to the benchmark and models will come in the future. Stay tuned!* ## Intended uses & limitations The usage is identical to a [Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL) model. Our model is primarily useful for camera motion classification in videos as well as video-text retrieval (current SOTA in both tasks). **A quick demo is shown below:** <details> <summary>Generative Scoring (for classification and retrieval):</summary> ```python # Import necessary libraries from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor from qwen_vl_utils import process_vision_info import torch # Load the model model = Qwen2_5_VLForConditionalGeneration.from_pretrained( "chancharikm/qwen2.5-vl-72B-cam-motion-preview", torch_dtype="auto", device_map="auto" ) processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-72B-Instruct") # Prepare input data video_path = "file:///path/to/video1.mp4" text_description = "the camera tilting upward" question = f"Does this video show \"{text_description}\"?" 
# Format the input for the model messages = [ { "role": "user", "content": [ { "type": "video", "video": video_path, "fps": 8.0, # Recommended FPS for optimal inference }, {"type": "text", "text": question}, ], } ] text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", **video_kwargs ) inputs = inputs.to("cuda") # Generate with score output with torch.inference_mode(): outputs = model.generate( **inputs, max_new_tokens=1, do_sample=False, # Use greedy decoding to get reliable logprobs output_scores=True, return_dict_in_generate=True ) # Calculate probability of "Yes" response scores = outputs.scores[0] probs = torch.nn.functional.softmax(scores, dim=-1) yes_token_id = processor.tokenizer.encode("Yes")[0] score = probs[0, yes_token_id].item() print(f"Video: {video_path}") print(f"Description: '{text_description}'") print(f"Score: {score:.4f}") ``` </details> <details> <summary>Natural Language Generation</summary> ```python # The model is trained on 8.0 FPS which we recommend for optimal inference from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor from qwen_vl_utils import process_vision_info # default: Load the model on the available device(s) model = Qwen2_5_VLForConditionalGeneration.from_pretrained( "chancharikm/qwen2.5-vl-72B-cam-motion-preview", torch_dtype="auto", device_map="auto" ) # We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios. # model = Qwen2_5_VLForConditionalGeneration.from_pretrained( # "chancharikm/qwen2.5-vl-72B-cam-motion-preview", # torch_dtype=torch.bfloat16, # attn_implementation="flash_attention_2", # device_map="auto", # ) # default processor processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-72B-Instruct") messages = [ { "role": "user", "content": [ { "type": "video", "video": "file:///path/to/video1.mp4", "fps": 8.0, }, {"type": "text", "text": "Describe the camera motion in this video."}, ], } ] text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", **video_kwargs, # fps is already carried in video_kwargs, so it is not passed again here ) inputs = inputs.to("cuda") # Inference generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_text) ``` </details> ## Training and evaluation data Training and evaluation data can be found in our [repo](https://github.com/sy77777en/CameraBench). ## ✏️ Citation If you find this repository useful for your research, please use the following. ``` @article{lin2025camerabench, title={Towards Understanding Camera Motions in Any Video}, author={Lin, Zhiqiu and Cen, Siyuan and Jiang, Daniel and Karhade, Jay and Wang, Hewei and Mitra, Chancharik and Ling, Tiffany and Huang, Yuhan and Liu, Sifan and Chen, Mingyu and Zawar, Rushikesh and Bai, Xue and Du, Yilun and Gan, Chuang and Ramanan, Deva}, journal={arXiv preprint arXiv:2504.15376}, year={2025}, } ```
ethanos7909/chianeng
ethanos7909
2025-05-22T03:37:54Z
0
0
null
[ "license:other", "region:us" ]
null
2025-05-22T02:18:25Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
amps93/Qwen3-1.7B_qlora
amps93
2025-05-22T03:36:19Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-22T03:36:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
shanchen/ds-limo-linearja-250
shanchen
2025-05-22T03:34:03Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "mergekit", "merge", "conversational", "arxiv:2203.05482", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "base_model:merge:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "base_model:shanchen/ds-limo-ja-250", "base_model:merge:shanchen/ds-limo-ja-250", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-22T03:27:49Z
--- base_model: - shanchen/ds-limo-ja-250 - deepseek-ai/DeepSeek-R1-Distill-Qwen-7B library_name: transformers tags: - mergekit - merge --- # mlinearja This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method. ### Models Merged The following models were included in the merge: * [shanchen/ds-limo-ja-250](https://huggingface.co/shanchen/ds-limo-ja-250) * [deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B parameters: weight: 1.0 - model: shanchen/ds-limo-ja-250 parameters: weight: 0.5 merge_method: linear dtype: float16 ```
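The YAML above can be replayed with mergekit itself; the sketch below follows mergekit's documented Python entry points and assumes the configuration is saved as `config.yaml` (paths and options are illustrative, not the exact command used by the author):

```python
# Hedged sketch: re-run this linear merge with mergekit's Python API.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml") as f:  # the YAML block from this card
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    config,
    "./ds-limo-linearja-250-merged",  # assumption: any output directory works
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```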
shanchen/ds-limo-mer4ge-250
shanchen
2025-05-22T02:52:14Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "mergekit", "merge", "conversational", "arxiv:2306.01708", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "base_model:merge:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "base_model:shanchen/ds-limo-fr-250", "base_model:merge:shanchen/ds-limo-fr-250", "base_model:shanchen/ds-limo-ja-250", "base_model:merge:shanchen/ds-limo-ja-250", "base_model:shanchen/ds-limo-te-250", "base_model:merge:shanchen/ds-limo-te-250", "base_model:shanchen/ds-limo-th-250", "base_model:merge:shanchen/ds-limo-th-250", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-22T02:46:35Z
--- base_model: - shanchen/ds-limo-te-250 - deepseek-ai/DeepSeek-R1-Distill-Qwen-7B - shanchen/ds-limo-th-250 - shanchen/ds-limo-ja-250 - shanchen/ds-limo-fr-250 library_name: transformers tags: - mergekit - merge --- # model1 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) as a base. ### Models Merged The following models were included in the merge: * [shanchen/ds-limo-te-250](https://huggingface.co/shanchen/ds-limo-te-250) * [shanchen/ds-limo-th-250](https://huggingface.co/shanchen/ds-limo-th-250) * [shanchen/ds-limo-ja-250](https://huggingface.co/shanchen/ds-limo-ja-250) * [shanchen/ds-limo-fr-250](https://huggingface.co/shanchen/ds-limo-fr-250) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: shanchen/ds-limo-fr-250 parameters: density: 0.25 weight: 0.25 - model: shanchen/ds-limo-th-250 parameters: density: 0.25 weight: 0.25 - model: shanchen/ds-limo-te-250 parameters: density: 0.25 weight: 0.25 - model: shanchen/ds-limo-ja-250 parameters: density: 0.25 weight: 0.25 merge_method: ties base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B parameters: normalize: false int8_mask: true dtype: float16 ```
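To reproduce a merge like this programmatically, mergekit also exposes a Python API. The snippet below is a sketch based on the usage pattern in the mergekit README (option names may differ across versions), assuming the YAML above is saved as `ties_config.yaml` and the output directory name is arbitrary:

```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the TIES configuration shown above.
with open("ties_config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge and write the result to ./merged-model.
run_merge(
    merge_config,
    out_path="./merged-model",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```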
fpadovani/fr_wiki_clm_13
fpadovani
2025-05-21T23:38:49Z
9
1
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-07T07:45:17Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: fr_wiki_clm_13 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fr_wiki_clm_13 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.4708 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 13 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 40000 - training_steps: 100000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-------:|:------:|:---------------:| | No log | 1.7190 | 2000 | 7.1490 | | 7.2211 | 3.4379 | 4000 | 5.8869 | | 7.2211 | 5.1569 | 6000 | 5.4206 | | 5.4603 | 6.8758 | 8000 | 5.0644 | | 5.4603 | 8.5948 | 10000 | 4.7704 | | 4.8205 | 10.3137 | 12000 | 4.5274 | | 4.8205 | 12.0327 | 14000 | 4.3241 | | 4.369 | 13.7516 | 16000 | 4.1509 | | 4.369 | 15.4706 | 18000 | 4.0131 | | 4.039 | 17.1895 | 20000 | 3.8967 | | 4.039 | 18.9085 | 22000 | 3.7902 | | 3.792 | 20.6274 | 24000 | 3.7083 | | 3.792 | 22.3464 | 26000 | 3.6318 | | 3.5988 | 24.0653 | 28000 | 3.5716 | | 3.5988 | 25.7843 | 30000 | 3.5175 | | 3.4366 | 27.5032 | 32000 | 3.4818 | | 3.4366 | 29.2222 | 34000 | 3.4478 | | 3.303 | 30.9411 | 36000 | 3.4144 | | 3.303 | 32.6601 | 38000 | 3.3936 | | 3.1808 | 34.3790 | 40000 | 3.3769 | | 3.1808 | 36.0980 | 42000 | 3.3654 | | 3.0681 | 37.8169 | 44000 | 3.3452 | | 3.0681 | 39.5359 | 46000 | 3.3394 | | 2.9562 | 41.2548 | 48000 | 3.3389 | | 2.9562 | 42.9738 | 50000 | 3.3301 | | 2.8647 | 44.6927 | 52000 | 3.3356 | | 2.8647 | 46.4117 | 54000 | 3.3422 | | 2.7848 | 48.1306 | 56000 | 3.3468 | | 2.7848 | 49.8496 | 58000 | 3.3467 | | 2.713 | 51.5685 | 60000 | 3.3575 | | 2.713 | 53.2875 | 62000 | 3.3672 | | 2.6546 | 55.0064 | 64000 | 3.3688 | | 2.6546 | 56.7254 | 66000 | 3.3764 | | 2.5942 | 58.4443 | 68000 | 3.3907 | | 2.5942 | 60.1633 | 70000 | 3.3962 | | 2.5486 | 61.8823 | 72000 | 3.4000 | | 2.5486 | 63.6012 | 74000 | 3.4071 | | 2.5009 | 65.3202 | 76000 | 3.4164 | | 2.5009 | 67.0391 | 78000 | 3.4277 | | 2.4625 | 68.7581 | 80000 | 3.4293 | | 2.4625 | 70.4770 | 82000 | 3.4382 | | 2.4249 | 72.1960 | 84000 | 3.4481 | | 2.4249 | 73.9149 | 86000 | 3.4495 | | 2.3918 | 75.6339 | 88000 | 3.4554 | | 2.3918 | 77.3528 | 90000 | 3.4596 | | 2.3637 | 79.0718 | 92000 | 3.4637 | | 2.3637 | 80.7907 | 94000 | 3.4666 | | 2.337 | 82.5097 | 96000 | 3.4702 | | 2.337 | 84.2286 | 98000 | 3.4712 | | 2.3163 | 85.9476 | 100000 | 3.4708 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.5.1+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
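A minimal generation sketch for this checkpoint (the French prompt and sampling settings are illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="fpadovani/fr_wiki_clm_13")

# Sample a short continuation from the French Wikipedia-trained LM.
print(generator("La Révolution française", max_new_tokens=40, do_sample=True, top_p=0.95)[0]["generated_text"])
```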
joeyderrrr/grpo-16-vllm
joeyderrrr
2025-05-21T21:52:56Z
0
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "unsloth", "trl", "grpo", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-21T21:21:07Z
--- library_name: transformers tags: - unsloth - trl - grpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PirateXX/AI-Content-Detector
PirateXX
2025-05-21T21:50:50Z
327
3
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "license:artistic-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-18T15:50:31Z
--- license: artistic-2.0 --- # AI Content Detector This RoBERTa-based text classifier detects AI-generated content.<br/> Label_0 represents Fake (AI-generated)<br/> Label_1 represents Real (human-written) View our website: [AI Content Detector](https://www.aiforfree.online/tools/text-to-detect)
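A minimal usage sketch with 🤗 Transformers (the example text is illustrative, and the `LABEL_0`/`LABEL_1` ids are assumed to follow the mapping above):

```python
from transformers import pipeline

detector = pipeline("text-classification", model="PirateXX/AI-Content-Detector")

text = "Artificial intelligence has rapidly transformed many industries in recent years."
result = detector(text)[0]

# LABEL_0 -> Fake (AI-generated), LABEL_1 -> Real (human-written)
print(result["label"], round(result["score"], 3))
```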
BootesVoid/cmayfwbj003hzu1cg91a5u874_cmayfzc5o03i6u1cgtkpq9z0x
BootesVoid
2025-05-21T21:43:33Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-21T21:43:31Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: ONLYFANS --- # Cmayfwbj003Hzu1Cg91A5U874_Cmayfzc5O03I6U1Cgtkpq9Z0X <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `ONLYFANS` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "ONLYFANS", "lora_weights": "https://huggingface.co/BootesVoid/cmayfwbj003hzu1cg91a5u874_cmayfzc5o03i6u1cgtkpq9z0x/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmayfwbj003hzu1cg91a5u874_cmayfzc5o03i6u1cgtkpq9z0x', weight_name='lora.safetensors') image = pipeline('ONLYFANS').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmayfwbj003hzu1cg91a5u874_cmayfzc5o03i6u1cgtkpq9z0x/discussions) to add images that show off what you’ve made with this LoRA.
ArtusDev/mistralai_Devstral-Small-2505_EXL3_6.5bpw_H8
ArtusDev
2025-05-21T21:35:33Z
0
0
vllm
[ "vllm", "safetensors", "mistral", "text2text-generation", "en", "fr", "de", "es", "pt", "it", "ja", "ko", "ru", "zh", "ar", "fa", "id", "ms", "ne", "pl", "ro", "sr", "sv", "tr", "uk", "vi", "hi", "bn", "base_model:mistralai/Devstral-Small-2505", "base_model:quantized:mistralai/Devstral-Small-2505", "license:apache-2.0", "exl3", "region:us" ]
text2text-generation
2025-05-21T20:15:24Z
---
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
license: apache-2.0
library_name: vllm
inference: false
base_model:
- mistralai/Devstral-Small-2505
base_model_relation: quantized
quantized_by: ArtusDev
extra_gated_description: >-
  If you want to learn more about how we process your personal data, please read
  our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
pipeline_tag: text2text-generation
---

# Devstral-Small-2505

Devstral is an agentic LLM for software engineering tasks built under a collaboration between [Mistral AI](https://mistral.ai/) and [All Hands AI](https://www.all-hands.dev/) 🙌. Devstral excels at using tools to explore codebases, editing multiple files, and powering software engineering agents. The model achieves remarkable performance on SWE-bench, which positions it as the #1 open-source model on this [benchmark](#benchmark-results).

It is finetuned from [Mistral-Small-3.1](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503) and therefore has a long context window of up to 128k tokens. As a coding agent, Devstral is text-only; the vision encoder was removed from `Mistral-Small-3.1` before fine-tuning.

For enterprises requiring specialized capabilities (increased context, domain-specific knowledge, etc.), we will release commercial models beyond what Mistral AI contributes to the community.

Learn more about Devstral in our [blog post](https://mistral.ai/news/devstral).

## Key Features:
- **Agentic coding**: Devstral is designed to excel at agentic coding tasks, making it a great choice for software engineering agents.
- **Lightweight**: With its compact size of just 24 billion parameters, Devstral is light enough to run on a single RTX 4090 or a Mac with 32GB RAM, making it an appropriate model for local deployment and on-device use.
- **Apache 2.0 License**: Open license allowing usage and modification for both commercial and non-commercial purposes.
- **Context Window**: A 128k context window.
- **Tokenizer**: Utilizes a Tekken tokenizer with a 131k vocabulary size.

## Benchmark Results

### SWE-Bench

Devstral achieves a score of 46.8% on SWE-Bench Verified, outperforming prior open-source SoTA by 6%.

| Model | Scaffold | SWE-Bench Verified (%) |
|------------------|--------------------|------------------------|
| Devstral | OpenHands Scaffold | **46.8** |
| GPT-4.1-mini | OpenAI Scaffold | 23.6 |
| Claude 3.5 Haiku | Anthropic Scaffold | 40.6 |
| SWE-smith-LM 32B | SWE-agent Scaffold | 40.2 |

When evaluated under the same test scaffold (OpenHands, provided by All Hands AI 🙌), Devstral exceeds far larger models such as DeepSeek-V3-0324 and Qwen3 235B-A22B.

![SWE Benchmark](assets/swe_bench.png)

## Usage

We recommend using Devstral with the [OpenHands](https://github.com/All-Hands-AI/OpenHands/tree/main) scaffold. You can use it either through our API or by running it locally.

### API

Follow these [instructions](https://docs.mistral.ai/getting-started/quickstart/#account-setup) to create a Mistral account and get an API key.

Then run these commands to start the OpenHands docker container.
```bash
export MISTRAL_API_KEY=<MY_KEY>

docker pull docker.all-hands.dev/all-hands-ai/runtime:0.39-nikolaik

mkdir -p ~/.openhands-state && echo '{"language":"en","agent":"CodeActAgent","max_iterations":null,"security_analyzer":null,"confirmation_mode":false,"llm_model":"mistral/devstral-small-2505","llm_api_key":"'$MISTRAL_API_KEY'","remote_runtime_resource_factor":null,"github_token":null,"enable_default_condenser":true}' > ~/.openhands-state/settings.json

docker run -it --rm --pull=always \
    -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.39-nikolaik \
    -e LOG_ALL_EVENTS=true \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v ~/.openhands-state:/.openhands-state \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app \
    docker.all-hands.dev/all-hands-ai/openhands:0.39
```

### Local inference

The model can also be deployed with the following libraries:
- [`vllm (recommended)`](https://github.com/vllm-project/vllm): See [here](#vllm-recommended)
- [`mistral-inference`](https://github.com/mistralai/mistral-inference): See [here](#mistral-inference)
- [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)
- [`LMStudio`](https://lmstudio.ai/): See [here](#lmstudio)
- [`ollama`](https://github.com/ollama/ollama): See [here](#ollama)

### OpenHands (recommended)

#### Launch a server to deploy Devstral-Small-2505

Make sure you launched an OpenAI-compatible server such as vLLM or Ollama as described above. Then, you can use OpenHands to interact with `Devstral-Small-2505`.

For this tutorial, we spun up a vLLM server by running the command:

```bash
vllm serve mistralai/Devstral-Small-2505 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2
```

The server address should be in the following format: `http://<your-server-url>:8000/v1`

#### Launch OpenHands

You can follow the installation instructions for OpenHands [here](https://docs.all-hands.dev/modules/usage/installation).

The easiest way to launch OpenHands is to use the Docker image:

```bash
docker pull docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik

docker run -it --rm --pull=always \
    -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik \
    -e LOG_ALL_EVENTS=true \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v ~/.openhands-state:/.openhands-state \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app \
    docker.all-hands.dev/all-hands-ai/openhands:0.38
```

Then, you can access the OpenHands UI at `http://localhost:3000`.

#### Connect to the server

When accessing the OpenHands UI, you will be prompted to connect to a server. You can use the advanced mode to connect to the server you launched earlier.

Fill in the following fields:
- **Custom Model**: `openai/mistralai/Devstral-Small-2505`
- **Base URL**: `http://<your-server-url>:8000/v1`
- **API Key**: `token` (or any other token you used to launch the server, if any)

#### Use OpenHands powered by Devstral

Now you're good to use Devstral Small inside OpenHands by **starting a new conversation**. Let's build a To-Do list app.

<details>
<summary>To-Do list app</summary>

1. Let's ask Devstral to generate the app with the following prompt:

```txt
Build a To-Do list app with the following requirements:
- Built using FastAPI and React.
- Make it a one page app that:
   - Allows to add a task.
   - Allows to delete a task.
   - Allows to mark a task as done.
   - Displays the list of tasks.
- Store the tasks in a SQLite database.
```

![Agent prompting](assets/tuto_open_hands/agent_prompting.png)

2. Let's see the result

You should see the agent construct the app and be able to explore the code it generated. If it doesn't do so automatically, ask Devstral to deploy the app or do it manually, and then go to the deployed frontend URL to see the app.

![Agent working](assets/tuto_open_hands/agent_working.png)
![App UI](assets/tuto_open_hands/app_ui.png)

3. Iterate

Now that you have a first result, you can iterate on it by asking your agent to improve it. For example, in the generated app we could click on a task to mark it checked, but having a checkbox would improve the UX. You could also ask it to add a feature to edit a task, or a feature to filter the tasks by status.

Enjoy building with Devstral Small and OpenHands!
</details>

### vLLM (recommended)

We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm) to implement production-ready inference pipelines.

**_Installation_**

Make sure you install [`vLLM >= 0.8.5`](https://github.com/vllm-project/vllm/releases/tag/v0.8.5):

```
pip install vllm --upgrade
```

Doing so should automatically install [`mistral_common >= 1.5.5`](https://github.com/mistralai/mistral-common/releases/tag/v1.5.5). To check:

```
python -c "import mistral_common; print(mistral_common.__version__)"
```

You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or one from [Docker Hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39).

#### Server

We recommend that you use Devstral in a server/client setting.

1. Spin up a server:

```
vllm serve mistralai/Devstral-Small-2505 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2
```

2. To query the server, you can use a simple Python snippet:

```py
import requests
import json
from huggingface_hub import hf_hub_download

url = "http://<your-server-url>:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}

model = "mistralai/Devstral-Small-2505"

def load_system_prompt(repo_id: str, filename: str) -> str:
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()
    return system_prompt

SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "<your-command>",
            },
        ],
    },
]

data = {"model": model, "messages": messages, "temperature": 0.15}

response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["content"])
```

### Mistral-inference

We recommend using mistral-inference to quickly try out / "vibe-check" Devstral.

#### Install

Make sure to have mistral_inference >= 1.6.0 installed.
```bash
pip install mistral_inference --upgrade
```

#### Download

```python
from huggingface_hub import snapshot_download
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', 'Devstral')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(repo_id="mistralai/Devstral-Small-2505", allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"], local_dir=mistral_models_path)
```

#### Python

You can run the model using the following command:

```bash
mistral-chat $HOME/mistral_models/Devstral --instruct --max_tokens 300
```

You can then prompt it with anything you'd like.

### Transformers

To make the best use of our model with transformers, make sure to have [installed](https://github.com/mistralai/mistral-common) `mistral-common >= 1.5.5` to use our tokenizer.

```bash
pip install mistral-common --upgrade
```

Then load our tokenizer along with the model and generate:

```python
import torch

from mistral_common.protocol.instruct.messages import (
    SystemMessage, UserMessage
)
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.tokens.tokenizers.tekken import SpecialTokenPolicy
from huggingface_hub import hf_hub_download
from transformers import AutoModelForCausalLM

def load_system_prompt(repo_id: str, filename: str) -> str:
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()
    return system_prompt

model_id = "mistralai/Devstral-Small-2505"
tekken_file = hf_hub_download(repo_id=model_id, filename="tekken.json")
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")

tokenizer = MistralTokenizer.from_file(tekken_file)

model = AutoModelForCausalLM.from_pretrained(model_id)

tokenized = tokenizer.encode_chat_completion(
    ChatCompletionRequest(
        messages=[
            SystemMessage(content=SYSTEM_PROMPT),
            UserMessage(content="<your-command>"),
        ],
    )
)

output = model.generate(
    input_ids=torch.tensor([tokenized.tokens]),
    max_new_tokens=1000,
)[0]

decoded_output = tokenizer.decode(output[len(tokenized.tokens):])
print(decoded_output)
```

### LMStudio

Download the weights from Hugging Face:

```
pip install -U "huggingface_hub[cli]"
huggingface-cli download \
    "mistralai/Devstral-Small-2505_gguf" \
    --include "devstralQ4_K_M.gguf" \
    --local-dir "mistralai/Devstral-Small-2505_gguf/"
```

You can serve the model locally with [LMStudio](https://lmstudio.ai/).
* Download [LM Studio](https://lmstudio.ai/) and install it
* Install the `lms` CLI: `~/.lmstudio/bin/lms bootstrap`
* In a bash terminal, run `lms import devstralQ4_K_M.gguf` in the directory where you've downloaded the model checkpoint (e.g. `mistralai/Devstral-Small-2505_gguf`)
* Open the LMStudio application, click the terminal icon to get into the developer tab, click "select a model to load", and select Devstral Q4 K M. Toggle the status button to start the model, and in settings toggle Serve on Local Network on.
* On the right tab, you will see an API identifier, which should be `devstralq4_k_m`, and an API address under API Usage. Keep note of this address; we will use it in the next step.

#### Launch OpenHands

You can now interact with the model served from LM Studio with OpenHands.
Start the OpenHands server with Docker:

```bash
docker pull docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik

docker run -it --rm --pull=always \
    -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik \
    -e LOG_ALL_EVENTS=true \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v ~/.openhands-state:/.openhands-state \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app \
    docker.all-hands.dev/all-hands-ai/openhands:0.38
```

Click “see advanced settings” on the second line. In the new tab, toggle advanced mode on. Set the custom model to `mistral/devstralq4_k_m` and the Base URL to the API address noted in the last step in LM Studio. Set the API Key to `dummy`. Click “save changes”.

### Ollama

You can run Devstral using the [Ollama](https://ollama.ai/) CLI.

```bash
ollama run devstral
```
ArtusDev/mistralai_Devstral-Small-2505_EXL3_5.0bpw_H6
ArtusDev
2025-05-21T21:33:53Z
0
0
vllm
[ "vllm", "safetensors", "mistral", "text2text-generation", "en", "fr", "de", "es", "pt", "it", "ja", "ko", "ru", "zh", "ar", "fa", "id", "ms", "ne", "pl", "ro", "sr", "sv", "tr", "uk", "vi", "hi", "bn", "base_model:mistralai/Devstral-Small-2505", "base_model:quantized:mistralai/Devstral-Small-2505", "license:apache-2.0", "5-bit", "exl3", "region:us" ]
text2text-generation
2025-05-21T18:08:30Z
---
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
license: apache-2.0
library_name: vllm
inference: false
base_model:
- mistralai/Devstral-Small-2505
base_model_relation: quantized
quantized_by: ArtusDev
extra_gated_description: >-
  If you want to learn more about how we process your personal data, please read
  our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
pipeline_tag: text2text-generation
---

# Devstral-Small-2505

Devstral is an agentic LLM for software engineering tasks built under a collaboration between [Mistral AI](https://mistral.ai/) and [All Hands AI](https://www.all-hands.dev/) 🙌. Devstral excels at using tools to explore codebases, editing multiple files, and powering software engineering agents. The model achieves remarkable performance on SWE-bench, which positions it as the #1 open-source model on this [benchmark](#benchmark-results).

It is finetuned from [Mistral-Small-3.1](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503) and therefore has a long context window of up to 128k tokens. As a coding agent, Devstral is text-only; the vision encoder was removed from `Mistral-Small-3.1` before fine-tuning.

For enterprises requiring specialized capabilities (increased context, domain-specific knowledge, etc.), we will release commercial models beyond what Mistral AI contributes to the community.

Learn more about Devstral in our [blog post](https://mistral.ai/news/devstral).

## Key Features:
- **Agentic coding**: Devstral is designed to excel at agentic coding tasks, making it a great choice for software engineering agents.
- **Lightweight**: With its compact size of just 24 billion parameters, Devstral is light enough to run on a single RTX 4090 or a Mac with 32GB RAM, making it an appropriate model for local deployment and on-device use.
- **Apache 2.0 License**: Open license allowing usage and modification for both commercial and non-commercial purposes.
- **Context Window**: A 128k context window.
- **Tokenizer**: Utilizes a Tekken tokenizer with a 131k vocabulary size.

## Benchmark Results

### SWE-Bench

Devstral achieves a score of 46.8% on SWE-Bench Verified, outperforming prior open-source SoTA by 6%.

| Model | Scaffold | SWE-Bench Verified (%) |
|------------------|--------------------|------------------------|
| Devstral | OpenHands Scaffold | **46.8** |
| GPT-4.1-mini | OpenAI Scaffold | 23.6 |
| Claude 3.5 Haiku | Anthropic Scaffold | 40.6 |
| SWE-smith-LM 32B | SWE-agent Scaffold | 40.2 |

When evaluated under the same test scaffold (OpenHands, provided by All Hands AI 🙌), Devstral exceeds far larger models such as DeepSeek-V3-0324 and Qwen3 235B-A22B.

![SWE Benchmark](assets/swe_bench.png)

## Usage

We recommend using Devstral with the [OpenHands](https://github.com/All-Hands-AI/OpenHands/tree/main) scaffold. You can use it either through our API or by running it locally.

### API

Follow these [instructions](https://docs.mistral.ai/getting-started/quickstart/#account-setup) to create a Mistral account and get an API key.

Then run these commands to start the OpenHands docker container.
```bash
export MISTRAL_API_KEY=<MY_KEY>

docker pull docker.all-hands.dev/all-hands-ai/runtime:0.39-nikolaik

mkdir -p ~/.openhands-state && echo '{"language":"en","agent":"CodeActAgent","max_iterations":null,"security_analyzer":null,"confirmation_mode":false,"llm_model":"mistral/devstral-small-2505","llm_api_key":"'$MISTRAL_API_KEY'","remote_runtime_resource_factor":null,"github_token":null,"enable_default_condenser":true}' > ~/.openhands-state/settings.json

docker run -it --rm --pull=always \
    -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.39-nikolaik \
    -e LOG_ALL_EVENTS=true \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v ~/.openhands-state:/.openhands-state \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app \
    docker.all-hands.dev/all-hands-ai/openhands:0.39
```

### Local inference

The model can also be deployed with the following libraries:
- [`vllm (recommended)`](https://github.com/vllm-project/vllm): See [here](#vllm-recommended)
- [`mistral-inference`](https://github.com/mistralai/mistral-inference): See [here](#mistral-inference)
- [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)
- [`LMStudio`](https://lmstudio.ai/): See [here](#lmstudio)
- [`ollama`](https://github.com/ollama/ollama): See [here](#ollama)

### OpenHands (recommended)

#### Launch a server to deploy Devstral-Small-2505

Make sure you launched an OpenAI-compatible server such as vLLM or Ollama as described above. Then, you can use OpenHands to interact with `Devstral-Small-2505`.

For this tutorial, we spun up a vLLM server by running the command:

```bash
vllm serve mistralai/Devstral-Small-2505 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2
```

The server address should be in the following format: `http://<your-server-url>:8000/v1`

#### Launch OpenHands

You can follow the installation instructions for OpenHands [here](https://docs.all-hands.dev/modules/usage/installation).

The easiest way to launch OpenHands is to use the Docker image:

```bash
docker pull docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik

docker run -it --rm --pull=always \
    -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik \
    -e LOG_ALL_EVENTS=true \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v ~/.openhands-state:/.openhands-state \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app \
    docker.all-hands.dev/all-hands-ai/openhands:0.38
```

Then, you can access the OpenHands UI at `http://localhost:3000`.

#### Connect to the server

When accessing the OpenHands UI, you will be prompted to connect to a server. You can use the advanced mode to connect to the server you launched earlier.

Fill in the following fields:
- **Custom Model**: `openai/mistralai/Devstral-Small-2505`
- **Base URL**: `http://<your-server-url>:8000/v1`
- **API Key**: `token` (or any other token you used to launch the server, if any)

#### Use OpenHands powered by Devstral

Now you're good to use Devstral Small inside OpenHands by **starting a new conversation**. Let's build a To-Do list app.

<details>
<summary>To-Do list app</summary>

1. Let's ask Devstral to generate the app with the following prompt:

```txt
Build a To-Do list app with the following requirements:
- Built using FastAPI and React.
- Make it a one page app that:
   - Allows to add a task.
   - Allows to delete a task.
   - Allows to mark a task as done.
   - Displays the list of tasks.
- Store the tasks in a SQLite database.
```

![Agent prompting](assets/tuto_open_hands/agent_prompting.png)

2. Let's see the result

You should see the agent construct the app and be able to explore the code it generated. If it doesn't do so automatically, ask Devstral to deploy the app or do it manually, and then go to the deployed frontend URL to see the app.

![Agent working](assets/tuto_open_hands/agent_working.png)
![App UI](assets/tuto_open_hands/app_ui.png)

3. Iterate

Now that you have a first result, you can iterate on it by asking your agent to improve it. For example, in the generated app we could click on a task to mark it checked, but having a checkbox would improve the UX. You could also ask it to add a feature to edit a task, or a feature to filter the tasks by status.

Enjoy building with Devstral Small and OpenHands!
</details>

### vLLM (recommended)

We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm) to implement production-ready inference pipelines.

**_Installation_**

Make sure you install [`vLLM >= 0.8.5`](https://github.com/vllm-project/vllm/releases/tag/v0.8.5):

```
pip install vllm --upgrade
```

Doing so should automatically install [`mistral_common >= 1.5.5`](https://github.com/mistralai/mistral-common/releases/tag/v1.5.5). To check:

```
python -c "import mistral_common; print(mistral_common.__version__)"
```

You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or one from [Docker Hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39).

#### Server

We recommend that you use Devstral in a server/client setting.

1. Spin up a server:

```
vllm serve mistralai/Devstral-Small-2505 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2
```

2. To query the server, you can use a simple Python snippet:

```py
import requests
import json
from huggingface_hub import hf_hub_download

url = "http://<your-server-url>:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}

model = "mistralai/Devstral-Small-2505"

def load_system_prompt(repo_id: str, filename: str) -> str:
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()
    return system_prompt

SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "<your-command>",
            },
        ],
    },
]

data = {"model": model, "messages": messages, "temperature": 0.15}

response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["content"])
```

### Mistral-inference

We recommend using mistral-inference to quickly try out / "vibe-check" Devstral.

#### Install

Make sure to have mistral_inference >= 1.6.0 installed.
```bash
pip install mistral_inference --upgrade
```

#### Download

```python
from huggingface_hub import snapshot_download
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', 'Devstral')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(repo_id="mistralai/Devstral-Small-2505", allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"], local_dir=mistral_models_path)
```

#### Python

You can run the model using the following command:

```bash
mistral-chat $HOME/mistral_models/Devstral --instruct --max_tokens 300
```

You can then prompt it with anything you'd like.

### Transformers

To make the best use of our model with transformers, make sure to have [installed](https://github.com/mistralai/mistral-common) `mistral-common >= 1.5.5` to use our tokenizer.

```bash
pip install mistral-common --upgrade
```

Then load our tokenizer along with the model and generate:

```python
import torch

from mistral_common.protocol.instruct.messages import (
    SystemMessage, UserMessage
)
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.tokens.tokenizers.tekken import SpecialTokenPolicy
from huggingface_hub import hf_hub_download
from transformers import AutoModelForCausalLM

def load_system_prompt(repo_id: str, filename: str) -> str:
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()
    return system_prompt

model_id = "mistralai/Devstral-Small-2505"
tekken_file = hf_hub_download(repo_id=model_id, filename="tekken.json")
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")

tokenizer = MistralTokenizer.from_file(tekken_file)

model = AutoModelForCausalLM.from_pretrained(model_id)

tokenized = tokenizer.encode_chat_completion(
    ChatCompletionRequest(
        messages=[
            SystemMessage(content=SYSTEM_PROMPT),
            UserMessage(content="<your-command>"),
        ],
    )
)

output = model.generate(
    input_ids=torch.tensor([tokenized.tokens]),
    max_new_tokens=1000,
)[0]

decoded_output = tokenizer.decode(output[len(tokenized.tokens):])
print(decoded_output)
```

### LMStudio

Download the weights from Hugging Face:

```
pip install -U "huggingface_hub[cli]"
huggingface-cli download \
    "mistralai/Devstral-Small-2505_gguf" \
    --include "devstralQ4_K_M.gguf" \
    --local-dir "mistralai/Devstral-Small-2505_gguf/"
```

You can serve the model locally with [LMStudio](https://lmstudio.ai/).
* Download [LM Studio](https://lmstudio.ai/) and install it
* Install the `lms` CLI: `~/.lmstudio/bin/lms bootstrap`
* In a bash terminal, run `lms import devstralQ4_K_M.gguf` in the directory where you've downloaded the model checkpoint (e.g. `mistralai/Devstral-Small-2505_gguf`)
* Open the LMStudio application, click the terminal icon to get into the developer tab, click "select a model to load", and select Devstral Q4 K M. Toggle the status button to start the model, and in settings toggle Serve on Local Network on.
* On the right tab, you will see an API identifier, which should be `devstralq4_k_m`, and an API address under API Usage. Keep note of this address; we will use it in the next step.

#### Launch OpenHands

You can now interact with the model served from LM Studio with OpenHands.
Start the OpenHands server with Docker:

```bash
docker pull docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik

docker run -it --rm --pull=always \
    -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik \
    -e LOG_ALL_EVENTS=true \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v ~/.openhands-state:/.openhands-state \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app \
    docker.all-hands.dev/all-hands-ai/openhands:0.38
```

Click “see advanced settings” on the second line. In the new tab, toggle advanced mode on. Set the custom model to `mistral/devstralq4_k_m` and the Base URL to the API address noted in the last step in LM Studio. Set the API Key to `dummy`. Click “save changes”.

### Ollama

You can run Devstral using the [Ollama](https://ollama.ai/) CLI.

```bash
ollama run devstral
```
nezamisafa/whisper-large-v3-turbo-fa-c13-avs
nezamisafa
2025-05-21T21:01:50Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "fa", "dataset:mozilla-foundation/common_voice_13_0", "base_model:openai/whisper-large-v3-turbo", "base_model:finetune:openai/whisper-large-v3-turbo", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-05-20T04:09:48Z
--- library_name: transformers language: - fa license: mit base_model: openai/whisper-large-v3-turbo tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_13_0 metrics: - wer model-index: - name: 'whisper-large-v3-turbo-fa-c13-avs ' results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 13.0 type: mozilla-foundation/common_voice_13_0 config: fa split: None args: 'config: fa, split: test' metrics: - name: Wer type: wer value: 27.926705588904422 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-large-v3-turbo-fa-c13-avs This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on the Common Voice 13.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.2687 - Wer: 27.9267 ## Model description ![image/png](https://cdn-uploads.huggingface.co/production/uploads/67924ef9ffafde6ac6070a33/QLq3z1fK1OlMwx2APNqB-.png) ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 0.2167 | 0.4160 | 1000 | 0.4101 | 39.1360 | | 0.1643 | 0.8319 | 2000 | 0.3560 | 34.6257 | | 0.0874 | 1.2479 | 3000 | 0.3249 | 32.9962 | | 0.0873 | 1.6639 | 4000 | 0.2836 | 29.3890 | | 0.0421 | 2.0799 | 5000 | 0.2687 | 27.9267 | ### Framework versions - Transformers 4.52.1 - Pytorch 2.7.0+cu126 - Datasets 3.6.0 - Tokenizers 0.21.1 ### Notes - Amir Nezami safa - Vahid Mahmodiyan - Shahab Salehi
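A short transcription sketch (the audio path is illustrative; any 16 kHz Persian recording works):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="nezamisafa/whisper-large-v3-turbo-fa-c13-avs",
)

result = asr(
    "sample_fa.wav",
    generate_kwargs={"language": "persian", "task": "transcribe"},
)
print(result["text"])
```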
ThomasTheMaker/Llama3.1-6B-ReplaceMe-Healed-rkllm-v1.2.0
ThomasTheMaker
2025-05-21T20:52:45Z
0
0
null
[ "llama", "arxiv:2505.02819", "license:apache-2.0", "region:us" ]
null
2025-05-21T20:20:00Z
---
license: apache-2.0
---

# ReplaceMe: Training-Free Transformer Pruning via Layer Removal & Linear Transformations

[![arXiv](https://img.shields.io/badge/arXiv-2505.02819-b31b1b.svg)](https://arxiv.org/abs/2505.02819) [![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)

![ReplaceMe Logo](./figs/logo2.jpg)

## Model Description

ReplaceMe is a novel method for transformer model compression that enables **training-free** block/layer pruning while maintaining model performance through linear transformations (LTs). The approach:
- Identifies and removes blocks of layers
- Applies mathematically derived transformations to preserve information flow
- Requires no fine-tuning or retraining
- Works with standard transformer architectures (the LTs are merged with the original model weights)

## Key Features

- 🚀 **Zero-Training Pruning**: Remove layers without any fine-tuning
- 🧠 **Performance Preservation**: <8% accuracy drop in most cases
- ⚡ **Instant Speedup**: fewer blocks -> faster inference + less memory
- 🔌 **Plug-and-Play**: Works with existing HuggingFace models

## 🔥 Performance Comparison of Pruning Methods (Llama 3.1 8B, 25% Compression)

| Method | num_pruned_layers | Dataset | State | race 🏁 | winogrande 🎲 | piqa 🧠 | boolq ❓ | openbookqa 📖 | sciq 🔬 | lambada_openai 🦙 | ppl | Avg-acc 📊 |
|-----------------------|-------------------|------------|---------------|--------|--------------|--------|---------|--------------|--------|------------------|-----------|------------|
| | | | | acc | acc | acc_norm | acc | acc_norm | acc_norm | acc | | |
| **Llama 3.1** (baseline) | - | - | - | 0.450 | 0.779 | 0.810 | 0.842 | 0.430 | 0.961 | 0.732 | 3.404 | **0.712** |
| **UIDL*** | 8 | slim_orca | no training | 0.341 | 0.719 | 0.690 | 0.773 | 0.310 | 0.719 | 0.087 | 932.000 | 0.592 |
| **ReplaceMe** (Ours) ✅ | 8 | slim_orca | no training | 0.406 | **0.742** 🏆 | 0.706 | 0.830 | 0.338 | 0.901 | 0.471 | 16.760 | 0.654 |
| **ReplaceMe** (Ours) ❌ | 8 | slim_orca | SFT | **0.431** 🏆 | 0.716 | **0.728** 🏆 | **0.849** 🏆 | **0.378** 🏆 | **0.912** 🏆 | **0.697** 🏆 | 4.04 🏆 | **0.669** 🏆 |

**Key:**
- 🏆 Best performance in column
- ✅ Training-free (our methods)
- ❌ Requires training

**Metrics Explained:**
- **Bold**: Best training-free results
- All numbers are accuracy scores

> 🔥 **Our healed model achieves 94.0% of baseline performance after healing on 1B tokens!**

## Installation

```bash
pip install replaceme
# or
git clone https://github.com/mts-ai/ReplaceMe
cd ReplaceMe
pip install -e .
```

## Basic Usage

```bash
# LSTSQ method (recommended)
run_replaceme --config ./reproduce/Replace_Me_pipeline_lstsq.yaml

# Cosine similarity method
run_replaceme --config ./reproduce/Replace_Me_pipeline_cosine.yaml
```

There are many parameters you can play with; visit our repo and discover 🔥🔥

## Load Model

As noted above, the LTs are merged into the original transformer weights, so you load the model as usual:

```python
## EXAMPLE
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "MTSAIR/Llama3.1-6B-ReplaceMe-Healed"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "What is the ReplaceMe pruning method?"
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

output = model.generate(
    **model_inputs,
    max_new_tokens=512
)
response = tokenizer.batch_decode(output, skip_special_tokens=True)[0]
```

## Citation

If you use ReplaceMe in your research, please cite our paper:

```bibtex
@article{shopkhoev2025replaceme0,
  title   = {ReplaceMe: Network Simplification via Layer Pruning and Linear Transformations},
  author  = {Dmitriy Shopkhoev and Ammar Ali and Magauiya Zhussip and Valentin Malykh and Stamatios Lefkimmiatis and Nikos Komodakis and Sergey Zagoruyko},
  year    = {2025},
  journal = {arXiv preprint arXiv:2505.02819}
}
```
haihp02/1ad156f8-03d0-44ab-9d9a-7dd51b303128-phase1-adapter
haihp02
2025-05-21T17:14:14Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:unsloth/Phi-3.5-mini-instruct", "base_model:finetune:unsloth/Phi-3.5-mini-instruct", "endpoints_compatible", "region:us" ]
null
2025-05-21T16:04:11Z
--- base_model: unsloth/Phi-3.5-mini-instruct library_name: transformers model_name: 1ad156f8-03d0-44ab-9d9a-7dd51b303128-phase1-adapter tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for 1ad156f8-03d0-44ab-9d9a-7dd51b303128-phase1-adapter This model is a fine-tuned version of [unsloth/Phi-3.5-mini-instruct](https://huggingface.co/unsloth/Phi-3.5-mini-instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="haihp02/1ad156f8-03d0-44ab-9d9a-7dd51b303128-phase1-adapter", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/trunghainguyenhp02/sn56-sft-before-dpo-train/runs/ndb077vl) This model was trained with SFT. ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.7.0+cu126 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
iancu003/climateSentiment3
iancu003
2025-05-21T17:01:49Z
0
0
fastai
[ "fastai", "region:us" ]
null
2025-05-21T16:34:04Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
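As a starting point, the learner can be loaded with the `huggingface_hub` fastai integration. A minimal sketch (the example sentence is illustrative, and the label set depends on how the learner was trained):

```python
from huggingface_hub import from_pretrained_fastai

# Download the fastai learner from the Hub and run a prediction.
learner = from_pretrained_fastai("iancu003/climateSentiment3")
prediction = learner.predict("Renewable energy adoption is accelerating worldwide.")
print(prediction)
```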
OL-OL/llama3-vision-graphs-5000_finetuned
OL-OL
2025-05-21T16:43:43Z
0
0
transformers
[ "transformers", "safetensors", "mllama", "image-text-to-text", "trl", "sft", "conversational", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
image-text-to-text
2025-05-21T16:41:17Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rubricreward/R3-Phi-4-reasoning-plus-4k
rubricreward
2025-05-21T16:40:10Z
2
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "en", "dataset:rubricreward/R3-Dataset-4K", "arxiv:2505.13388", "base_model:microsoft/Phi-4-reasoning-plus", "base_model:finetune:microsoft/Phi-4-reasoning-plus", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-15T03:58:25Z
---
license: apache-2.0
language:
- en
datasets:
- rubricreward/R3-Dataset-4K
base_model:
- microsoft/Phi-4-reasoning-plus
pipeline_tag: text-generation
library_name: transformers
---

<img alt="R3 Logo" src="https://cdn-avatars.huggingface.co/v1/production/uploads/651803f834c26962535eb022/hj3UEN9_9wlkmvMfUY1OL.png" width="150px">

# R3-Phi-4-reasoning-plus-4k

R3-Phi-4-reasoning-plus-4k is part of the R3 family, a series of **R**obust **R**ubric-Agnostic **R**eward Models. We perform SFT on the Qwen3 model family at the 4B, 8B, and 14B scales, as well as on Phi-4-reasoning-plus. Check out [our paper](https://arxiv.org/abs/2505.13388) for more information!

## Model description

- **Model type:** A reward model trained on a curated R3 dataset collected from 45 diverse sources covering tasks such as classification, preference optimization, and question answering. Each example in the dataset contains an instruction and task description, an input, one or more responses, evaluation rubrics, and a score along with the corresponding reasoning.
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** microsoft/Phi-4-reasoning-plus

### Model Sources

- **Project Page:** https://rubricreward.github.io
- **Repository:** https://github.com/rubricreward/r3
- **Paper:** https://arxiv.org/abs/2505.13388

## Using the Model

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_path = "rubricreward/R3-Phi-4-reasoning-plus-4k"
tokenizer = AutoTokenizer.from_pretrained(model_path)
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32768, min_p=0, top_k=50)

llm = LLM(
    model=model_path,
    dtype="bfloat16",
    max_model_len=10000,
    tensor_parallel_size=2,
    gpu_memory_utilization=0.9,
    enforce_eager=True,
)

messages: list[dict[str, str]] = [
    {'content': "Evaluate the response based on the given task, input, response, and evaluation rubric. Provide a fair and detailed assessment following the rubric...", 'role': 'user'}
]

list_text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # Switch between thinking and non-thinking modes.
)

outputs = llm.generate(list_text, sampling_params)
```

## License and use

R3 is licensed under the Apache 2.0 license.

## Citation

```bibtex
@article{anugraha2025r3,
  title={R3: Robust Rubric-Agnostic Reward Models},
  author={Anugraha, David and Tang, Zilu and Miranda, Lester James V. and Zhao, Hanyang and Farhansyah, Mohammad Rifqi and Kuwanto, Garry and Wijaya, Derry and Winata, Genta Indra},
  journal={arXiv preprint arXiv:2505.13388},
  year={2025}
}
```
manuth/wer7_augPitch
manuth
2025-05-21T16:36:11Z
0
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "khm", "base_model:openai/whisper-large-v3-turbo", "base_model:finetune:openai/whisper-large-v3-turbo", "license:mit", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-05-21T16:31:02Z
---
library_name: transformers
pipeline_tag: automatic-speech-recognition
language:
- khm
license: mit
base_model: openai/whisper-large-v3-turbo
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Finetuned for Khmer
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Whisper Finetuned for Khmer

This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.0204
- Wer: 0.0998

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.0252        | 0.7722 | 200  | 0.0256          | 0.1182 |
| 0.0193        | 1.5444 | 400  | 0.0219          | 0.1037 |
| 0.0099        | 2.3166 | 600  | 0.0204          | 0.0998 |

### Framework versions

- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
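The card omits a usage snippet. Since the repository is tagged `automatic-speech-recognition` with a Whisper backbone, a minimal sketch along the following lines should work; the audio file name is a placeholder, not a file shipped with the repo.

```python
# A minimal sketch, assuming standard 🤗 Transformers ASR pipeline usage;
# "sample_khmer.wav" is a hypothetical local audio file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="manuth/wer7_augPitch",
)

result = asr("sample_khmer.wav")
print(result["text"])
```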
EmreGed/sunergy8bit8e
EmreGed
2025-05-21T16:34:55Z
0
0
null
[ "gguf", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-21T16:30:37Z
--- license: apache-2.0 ---
NicholasYYY/yolov1o-aquascape
NicholasYYY
2025-05-21T16:31:39Z
0
0
null
[ "tflite", "Aquascape", "YOLO", "Classification", "en", "arxiv:1910.09700", "region:us" ]
null
2025-05-21T15:34:37Z
--- language: - en base_model: - Ultralytics/YOLOv10 tags: - Aquascape - YOLO - Classification --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
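The "How to Get Started" section above is left empty. Given the repo is tagged `tflite`, a generic TensorFlow Lite loading sketch is shown below; the file name and the random dummy input are assumptions, since the card does not document the artifact layout.

```python
# A minimal sketch, assuming the repo ships a TFLite classifier;
# "model.tflite" is a hypothetical file name.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy tensor matching the model's declared input shape and dtype.
dummy = np.random.rand(*input_details[0]["shape"]).astype(input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

scores = interpreter.get_tensor(output_details[0]["index"])
print(scores.shape)  # class scores from the aquascape classifier
```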
bigfish951/wav2vec2-base-timit-demo-colab
bigfish951
2025-05-21T16:02:47Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-21T16:02:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
iancu003/climateSentiment2
iancu003
2025-05-21T15:51:06Z
0
0
fastai
[ "fastai", "region:us" ]
null
2025-05-21T15:50:50Z
---
tags:
- fastai
---

# Amazing!

🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!

# Some next steps

1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!

Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.

---

# Model card

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
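No usage snippet is given yet. For fastai models on the Hub, loading typically goes through `huggingface_hub.from_pretrained_fastai`; the sketch below assumes the repo was pushed with the standard fastai utilities, and the input sentence is a placeholder.

```python
# A minimal sketch, assuming a fastai text classifier pushed with
# push_to_hub_fastai; the example sentence is hypothetical.
from huggingface_hub import from_pretrained_fastai

learner = from_pretrained_fastai("iancu003/climateSentiment2")
pred, pred_idx, probs = learner.predict("Renewable energy adoption is accelerating.")
print(pred, probs[pred_idx])
```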
unity/inference-engine-othello
unity
2025-05-21T15:42:30Z
11
2
unity-sentis
[ "unity-sentis", "onnx", "unity-inference-engine", "reinforcement-learning", "license:mit", "region:us" ]
reinforcement-learning
2024-01-10T03:27:48Z
---
license: mit
library_name: unity-sentis
pipeline_tag: reinforcement-learning
tags:
- unity-inference-engine
---

## Othello game-playing model in Unity 6 with Inference Engine

This is an Othello game-playing model based on [Alpha Zero General](https://github.com/suragnair/alpha-zero-general), a simplified, general-purpose adaptation of the AlphaGo/AlphaZero approach.

## How to Use

Example source code to run this model can be found at: [Source Code](https://github.com/Unity-Technologies/inference-engine-samples/tree/main/BoardGameAISample)

![preview](othello-preview.png)

## Inference Engine

Inference Engine is a neural network inference library for Unity. Find out more [here](https://docs.unity3d.com/Packages/com.unity.ai.inference@latest).
duythanh1022/finetune-clip-flickr8
duythanh1022
2025-05-21T15:20:29Z
5
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:ybelkada/blip2-opt-2.7b-fp16-sharded", "base_model:adapter:ybelkada/blip2-opt-2.7b-fp16-sharded", "region:us" ]
null
2025-05-20T15:22:06Z
---
library_name: peft
base_model: ybelkada/blip2-opt-2.7b-fp16-sharded
tags:
- generated_from_trainer
model-index:
- name: finetune-clip-flickr8
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetune-clip-flickr8

This model is a fine-tuned version of [ybelkada/blip2-opt-2.7b-fp16-sharded](https://huggingface.co/ybelkada/blip2-opt-2.7b-fp16-sharded) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.7363

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8236        | 0.5650 | 1000 | 1.7750          |
| 1.7656        | 1.1299 | 2000 | 1.7508          |
| 1.7662        | 1.6949 | 3000 | 1.7363          |

### Framework versions

- PEFT 0.15.2.dev0
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
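The card does not show how to load the adapter. Since this is a PEFT checkpoint on top of the BLIP-2 base listed above, a sketch like the following should work; the image path is a placeholder, and loading the processor from the base repo is an assumption.

```python
# A minimal sketch, assuming a standard PEFT adapter on top of BLIP-2;
# "photo.jpg" is a hypothetical local image.
import torch
from PIL import Image
from peft import PeftModel
from transformers import Blip2ForConditionalGeneration, Blip2Processor

base_id = "ybelkada/blip2-opt-2.7b-fp16-sharded"
processor = Blip2Processor.from_pretrained(base_id)  # assumes processor files live in the base repo
base = Blip2ForConditionalGeneration.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, "duythanh1022/finetune-clip-flickr8")

inputs = processor(images=Image.open("photo.jpg"), return_tensors="pt").to(base.device, torch.float16)
out = model.generate(**inputs, max_new_tokens=30)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```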
scanton/MNLP_M2_document_encoder
scanton
2025-05-21T14:56:45Z
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "feature-extraction", "arxiv:1910.09700", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2025-05-21T13:01:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
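The template above leaves the quick-start empty. Given the `xlm-roberta` and `feature-extraction` tags, a plausible embedding sketch follows; the mean-pooling step is an assumption, since the card does not say how embeddings are meant to be pooled.

```python
# A minimal sketch, assuming a standard XLM-RoBERTa encoder used for
# document embeddings; mean pooling is an assumption, not documented here.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("scanton/MNLP_M2_document_encoder")
enc = AutoModel.from_pretrained("scanton/MNLP_M2_document_encoder")

docs = ["A sample document.", "Another sample document."]
batch = tok(docs, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    out = enc(**batch)

# Mean-pool token embeddings, ignoring padding positions.
mask = batch["attention_mask"].unsqueeze(-1)
emb = (out.last_hidden_state * mask).sum(1) / mask.sum(1)
print(emb.shape)
```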
johngreendr1/6b03793c-4655-4378-8426-52b947643eea
johngreendr1
2025-05-21T14:43:49Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/mistral-7b-instruct-v0.2", "base_model:adapter:unsloth/mistral-7b-instruct-v0.2", "region:us" ]
null
2025-05-21T13:47:51Z
--- base_model: unsloth/mistral-7b-instruct-v0.2 library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
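The template gives no loading code. Since the metadata declares a PEFT adapter for `unsloth/mistral-7b-instruct-v0.2`, the standard pattern below should apply; whether the adapter is meant to be merged afterwards is not documented.

```python
# A minimal sketch for attaching this PEFT adapter to its declared base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/mistral-7b-instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

model = PeftModel.from_pretrained(base, "johngreendr1/6b03793c-4655-4378-8426-52b947643eea")
# Optionally fold the adapter weights into the base for faster inference:
# model = model.merge_and_unload()
```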
TIGER-Lab/General-Reasoner-Qwen2.5-7B
TIGER-Lab
2025-05-21T13:21:53Z
11275
2
null
[ "safetensors", "qwen2", "General-Reasoner-7B", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "arxiv:2505.14652", "base_model:Qwen/Qwen2.5-7B", "base_model:finetune:Qwen/Qwen2.5-7B", "license:apache-2.0", "region:us" ]
null
2025-04-06T20:32:08Z
---
license: apache-2.0
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-7B
tags:
- General-Reasoner-7B
---

# General-Reasoner: Advancing LLM Reasoning Across All Domains

<p align="center">
<a href="https://github.com/TIGER-AI-Lab/General-Reasoner" target="_blank">💻 Code</a> |
<a href="https://arxiv.org/abs/2505.14652" target="_blank">📄 Paper</a> |
<a href="https://huggingface.co/datasets/TIGER-Lab/WebInstruct-verified" target="_blank">📊 Dataset</a> |
<a href="https://huggingface.co/collections/TIGER-Lab/general-reasoner-67fe9386e43e046489eac013" target="_blank">🤗 Model</a> |
<a href="https://tiger-ai-lab.github.io/General-Reasoner/" target="_blank">🌐 Project Page</a>
</p>

## Overview

<p align="center">
<img src="https://tiger-ai-lab.github.io/General-Reasoner/static/images/teaser.png" alt="General-Reasoner Teaser" width="650"/>
</p>

<p align="center" style="font-style: italic; font-size: 0.95rem;">
<em>
Figure: Effectiveness of <strong>General-Reasoner</strong>, trained on diverse verifiable reasoning questions with a model-based verifier, compared to baseline methods on various reasoning tasks.
</em>
</p>

**General-Reasoner** is a training paradigm for large language models (LLMs), designed to robustly enhance reasoning abilities across diverse domains—not just mathematics and coding, but also physics, chemistry, finance, humanities, and more.

**Key features:**

- **Zero RL Training:** Direct reinforcement learning from base LLMs, bypassing intermediate supervised stages.
- **Diverse Reasoning Data:** 230K+ high-quality, verifiable questions sourced from the web and filtered for answer verifiability across disciplines.
- **Model-Based Verifier:** A compact 1.5B generative verifier model for context-aware, chain-of-thought answer validation, outperforming traditional rule-based methods.

**This specific model is the General-Reasoner variant trained from [Qwen2.5-7B-Base](https://huggingface.co/Qwen/Qwen2.5-7B).**

## Main Results

General-Reasoner outperforms base and supervised models on a variety of reasoning benchmarks, demonstrating robust generalization across domains:

<p align="center">
<a href="https://github.com/TIGER-AI-Lab/General-Reasoner/raw/refs/heads/gh-pages/static/images/results_general.png" target="_blank">
<img src="https://github.com/TIGER-AI-Lab/General-Reasoner/raw/refs/heads/gh-pages/static/images/results_general.png" alt="Main Results" width="600">
</a>
</p>

## Citation

If you find our work helpful, please cite:

```bibtex
@article{general-reasoner,
  title={{G}eneral-{R}easoner: Advancing LLM Reasoning Across All Domains},
  author={Xueguang Ma and Qian Liu and Dongfu Jiang and Ge Zhang and Zejun Ma and Wenhu Chen},
  year={2025},
  journal={arXiv:2505.14652},
  url={https://arxiv.org/abs/2505.14652}
}
```
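The card stops short of an inference snippet. A minimal sketch for this Qwen2.5-7B-based checkpoint, assuming standard 🤗 Transformers text-generation usage; the prompt is a placeholder.

```python
# A minimal sketch, assuming standard text-generation pipeline usage.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TIGER-Lab/General-Reasoner-Qwen2.5-7B",
    device_map="auto",
)

prompt = "A train travels 120 km in 1.5 hours. What is its average speed?"
print(generator(prompt, max_new_tokens=256)[0]["generated_text"])
```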
idontknowitsbrad/MaskSymmetryWork
idontknowitsbrad
2025-05-21T11:38:08Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-02T08:39:25Z
---
license: apache-2.0
---

This repository contains the best Mask2Former weights and the best Mask R-CNN weights trained on the Jersey Royal Dataset.

Both models are implemented in detectron2 and should load into the standard Mask R-CNN and Mask2Former implementations in detectron2.

Test results:

| Model       | mAP   | mAP@50 | mAP@75 |
|-------------|-------|--------|--------|
| Mask2Former | 0.591 | 0.832  | 0.617  |
| Mask R-CNN  | 0.803 | 0.917  | 0.845  |
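The card says the weights load into detectron2's standard implementations but gives no code. A sketch for the Mask R-CNN checkpoint follows; the config choice and local weight file name are assumptions (Mask2Former additionally needs its own config from the Mask2Former project).

```python
# A minimal sketch for the Mask R-CNN checkpoint, assuming a standard
# detectron2 install; "mask_rcnn_jersey_royal.pth" and "jersey_royals.jpg"
# are hypothetical file names.
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(
    model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
)
cfg.MODEL.WEIGHTS = "mask_rcnn_jersey_royal.pth"
# cfg.MODEL.ROI_HEADS.NUM_CLASSES may need adjusting to the dataset's class count.
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5

predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("jersey_royals.jpg"))
print(outputs["instances"].pred_classes)
```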
spencer21/dreamOINT8
spencer21
2025-05-21T11:31:06Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-21T11:01:53Z
--- license: apache-2.0 ---
sergioalves/3cfcb4d1-0a3d-4b15-9c95-fabb431921e1
sergioalves
2025-05-21T11:08:13Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "conversational", "arxiv:2305.18290", "base_model:upstage/SOLAR-10.7B-Instruct-v1.0", "base_model:quantized:upstage/SOLAR-10.7B-Instruct-v1.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-21T10:11:35Z
---
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
library_name: transformers
model_name: 3cfcb4d1-0a3d-4b15-9c95-fabb431921e1
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---

# Model Card for 3cfcb4d1-0a3d-4b15-9c95-fabb431921e1

This model is a fine-tuned version of [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sergioalves/3cfcb4d1-0a3d-4b15-9c95-fabb431921e1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/m8l3o1dl)

This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).

### Framework versions

- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1

## Citations

Cite DPO as:

```bibtex
@inproceedings{rafailov2023direct,
    title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
    author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
    year = 2023,
    booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
    url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
    editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title = {{TRL: Transformer Reinforcement Learning}},
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year = 2020,
    journal = {GitHub repository},
    publisher = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
rkutyu/gjyg
rkutyu
2025-05-21T10:57:06Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-21T10:57:06Z
--- license: apache-2.0 ---
xw17/Phi-3-mini-4k-instruct_finetuned_3_optimized1_task_grouping_off_FT
xw17
2025-05-21T08:23:42Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "trl", "sft", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-21T08:20:32Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
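The quick-start section above is empty. Since the repo is tagged for `text-generation` with a Phi-3 backbone and `custom_code`, a generic chat sketch like the one below should apply; the prompt is a placeholder.

```python
# A minimal sketch, assuming standard 🤗 Transformers chat-pipeline usage;
# trust_remote_code is set because the repo carries the custom_code tag.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="xw17/Phi-3-mini-4k-instruct_finetuned_3_optimized1_task_grouping_off_FT",
    trust_remote_code=True,
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize the benefits of task grouping in one sentence."}]
print(chat(messages, max_new_tokens=64)[0]["generated_text"])
```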
KayerShen/HempifiedCBDGummies
KayerShen
2025-05-21T06:19:42Z
0
0
null
[ "region:us" ]
null
2025-05-21T06:19:18Z
➥ ✅ Shop Now - https://supplementcarts.com/hempified-cbd-gummies-buy/

✔ Product Name — Hempified CBD Gummies
✔ Side Effects — No Major Side Effects
✔ Category — Health
✔ Results — In 1–2 Months
✔ Availability — Online
✔ Rating — 5.0/5.0 ⭐⭐⭐⭐⭐

Introduction

In the modern era of wellness and natural healing, CBD gummies have emerged as a convenient and enjoyable way to harness the therapeutic power of cannabidiol (CBD). Among the many options on the market, Hempified CBD Gummies stand out for their high-quality formulation, effectiveness, and user-friendly design. This article takes an in-depth look at Hempified CBD Gummies, exploring their benefits, ingredients, how they work, and what sets them apart from the competition.

What Are Hempified CBD Gummies?

Hempified CBD Gummies are edible, chewable supplements infused with CBD (cannabidiol), a non-psychoactive compound derived from the hemp plant. Unlike THC (tetrahydrocannabinol), CBD doesn't cause a "high." Instead, it offers a range of health benefits, such as reducing anxiety, alleviating pain, improving sleep, and supporting overall well-being. These gummies are crafted using premium hemp extract, are free from artificial additives, and offer a delicious and discreet way to incorporate CBD into your daily routine.
leilopzezica/zxvzxcv
leilopzezica
2025-05-21T05:30:58Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2025-05-21T05:30:58Z
--- license: bigscience-openrail-m ---
Chang-Hoo/gemma-localize
Chang-Hoo
2025-05-21T05:30:35Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-3-1b-pt", "base_model:finetune:google/gemma-3-1b-pt", "endpoints_compatible", "region:us" ]
null
2025-05-15T08:41:46Z
---
base_model: google/gemma-3-1b-pt
library_name: transformers
model_name: gemma-localize
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for gemma-localize

This model is a fine-tuned version of [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Chang-Hoo/gemma-localize", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with SFT.

### Framework versions

- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.3.2
- Tokenizers: 0.21.1

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title = {{TRL: Transformer Reinforcement Learning}},
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year = 2020,
    journal = {GitHub repository},
    publisher = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
Triangle104/Qwen3-16B-A3B-abliterated-Q5_K_M-GGUF
Triangle104
2025-05-21T05:25:50Z
0
0
transformers
[ "transformers", "gguf", "chat", "abliterated", "uncensored", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:huihui-ai/Qwen3-16B-A3B-abliterated", "base_model:quantized:huihui-ai/Qwen3-16B-A3B-abliterated", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-05-21T05:25:03Z
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: huihui-ai/Qwen3-16B-A3B-abliterated
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
extra_gated_prompt: '**Usage Warnings** “**Risk of Sensitive or Controversial Outputs**“: This model’s safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs. “**Not Suitable for All Audiences**:“ Due to limited content filtering, the model’s outputs may be inappropriate for public settings, underage users, or applications requiring high security. “**Legal and Ethical Responsibilities**“: Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences. “**Research and Experimental Use**“: It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications. “**Monitoring and Review Recommendations**“: Users are strongly advised to monitor model outputs in real-time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content. “**No Default Safety Guarantees**“: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.'
---

# Triangle104/Qwen3-16B-A3B-abliterated-Q5_K_M-GGUF

This model was converted to GGUF format from [`huihui-ai/Qwen3-16B-A3B-abliterated`](https://huggingface.co/huihui-ai/Qwen3-16B-A3B-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen3-16B-A3B-abliterated) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo Triangle104/Qwen3-16B-A3B-abliterated-Q5_K_M-GGUF --hf-file qwen3-16b-a3b-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo Triangle104/Qwen3-16B-A3B-abliterated-Q5_K_M-GGUF --hf-file qwen3-16b-a3b-abliterated-q5_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo Triangle104/Qwen3-16B-A3B-abliterated-Q5_K_M-GGUF --hf-file qwen3-16b-a3b-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo Triangle104/Qwen3-16B-A3B-abliterated-Q5_K_M-GGUF --hf-file qwen3-16b-a3b-abliterated-q5_k_m.gguf -c 2048
```
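As an alternative to the CLI above, the same GGUF file can be pulled and run from Python; this sketch assumes `llama-cpp-python` and `huggingface_hub` are installed.

```python
# A minimal sketch using llama-cpp-python; Llama.from_pretrained downloads
# the GGUF file straight from the Hub.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Triangle104/Qwen3-16B-A3B-abliterated-Q5_K_M-GGUF",
    filename="qwen3-16b-a3b-abliterated-q5_k_m.gguf",
    n_ctx=2048,
)

out = llm("The meaning to life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```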
MinaMila/phi3_unlearned_ug_e-5_1.0_0.15_0.05_LoRa_GermanCredit_cfda_ep8_55
MinaMila
2025-05-21T00:11:20Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-21T00:11:12Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
Oceans-ID/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scavenging_plump_armadillo
Oceans-ID
2025-05-20T06:13:02Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am scavenging plump armadillo", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-09T02:36:07Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scavenging_plump_armadillo tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am scavenging plump armadillo - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scavenging_plump_armadillo This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Oceans-ID/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scavenging_plump_armadillo", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
TNSA/NGen2-170M
TNSA
2025-05-20T06:10:36Z
0
0
transformers
[ "transformers", "safetensors", "text-generation", "en", "hi", "te", "base_model:TNSA/NGen2-15M", "base_model:finetune:TNSA/NGen2-15M", "license:other", "endpoints_compatible", "region:us" ]
text-generation
2025-05-20T05:23:08Z
---
license: other
license_name: ngen2-community-license
license_link: https://tnsaai-builds.framer.website/community/licenses/ngen2
language:
- en
- hi
- te
metrics:
- bleu
- perplexity
- accuracy
base_model:
- TNSA/NGen2-15M
pipeline_tag: text-generation
library_name: transformers
model_type: safetensors
new_version: TNSA/NGen3-15M
---

# NGen 2

When using NGen 2 with 🤗 Transformers, only the 15M variant is currently supported.

NGen 2 is an advanced Transformer model training pipeline that supports multiple model variants, ranging from a **nano** variant (approximately 120M parameters) to a **foundational** variant (approximately 1B parameters). The pipeline incorporates modern architectural improvements such as rotary positional embeddings, RMSNorm, and GEGLU activations to boost performance and training efficiency.

> **Note:** Although NGen 2 is designed to train a 1B-parameter model, its advanced architecture pushes its performance closer to that of much larger models. For better performance, consider NGen3.

## Model Variants

NGen 2 supports the following variants via the `--variant` flag:

- **nano**: ~120M parameters
- **small**: ~300M parameters
- **medium**: ~500M parameters
- **large**: ~700M parameters
- **foundational**: ~1B parameters

Each variant adjusts key hyperparameters such as the number of layers, model dimension (`d_model`), number of attention heads (`n_heads`), and the feed-forward dimension (`d_ff`).

## Requirements

- Python 3.8+
- PyTorch
- Transformers
- Datasets
- DeepSpeed (optional, for efficient training)
- Azure ML SDK (for distributed training on Azure)

Install dependencies using pip (adjust as needed):

```bash
pip install torch transformers datasets deepspeed azureml-core
```

# Usage

# 1. Data Preparation

First, download and preprocess the OpenWebText dataset:

```bash
python prepare.py --output_dir ./_data_ --max_length 4096
```

This script downloads, tokenizes, and saves the dataset in Arrow format to the `./_data_` directory.

# 2. Local Training

The main training script is `train.py`. It loads the processed dataset (by default from `./_data_`), instantiates the desired model variant, and starts training.

Example CLI commands:

- Train the nano (120M) variant:

```bash
python train.py --dataset_dir ./_data_ --output_dir ./checkpoints_nano --batch_size 4 --epochs 3 --variant nano
```

- Train the small (300M) variant:

```bash
python train.py --dataset_dir ./_data_ --output_dir ./checkpoints_small --batch_size 4 --epochs 3 --variant small
```

- Train the medium (500M) variant:

```bash
python train.py --dataset_dir ./_data_ --output_dir ./checkpoints_medium --batch_size 4 --epochs 3 --variant medium
```

- Train the large (700M) variant:

```bash
python train.py --dataset_dir ./_data_ --output_dir ./checkpoints_large --batch_size 4 --epochs 3 --variant large
```

- Train the foundational (1B) variant with rotary embeddings enabled:

```bash
python train.py --dataset_dir ./_data_ --output_dir ./checkpoints_foundational --batch_size 4 --epochs 3 --variant foundational --use_rotary
```

# 3. Training on Azure ML

- Step 1: Set up Azure ML resources. Use `azure_setup.py` to create or connect to your Azure ML workspace and set up a compute cluster:

```bash
python azure_setup.py \
  --workspace_name MyWorkspace \
  --resource_group MyResourceGroup \
  --subscription_id YOUR_SUBSCRIPTION_ID \
  --location eastus \
  --compute_name gpu-cluster \
  --vm_size Standard_NC6 \
  --max_nodes 4 \
  --min_nodes 0
```

- Step 2: Submit a training job to Azure ML. Use `submit_train.py` to submit your training script:

```bash
python submit_train.py \
  --experiment_name ngen3-experiment \
  --compute_target gpu-cluster \
  --script train.py \
  --dataset_dir ./_data_ \
  --output_dir ./checkpoints_foundational \
  --batch_size 4 \
  --epochs 3 \
  --variant foundational \
  --use_rotary
```

# 4. DeepSpeed Integration

The `deepspeed.json` file configures mixed-precision training and ZeRO optimizations. To leverage DeepSpeed, ensure it is installed and adjust your training script or submission command to enable DeepSpeed support.

# License

The NGen 2 project is developed and maintained by TNSA AI. The licensing model is dual:

- The nano and small variants are open source and released under the MIT License.
- The medium, large, and foundational variants are proprietary and are not open source. Use of these proprietary components is subject to TNSA AI's proprietary licensing terms.

# Copyright

© 2023 TNSA AI. All rights reserved. For usage terms, see the `LICENSE` file.
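The card states that only the 15M variant currently works with 🤗 Transformers but includes no inference snippet. Below is a minimal, unverified sketch, assuming `TNSA/NGen2-15M` (the `base_model` listed in the metadata) loads through the standard causal-LM API; everything else is illustrative.

```python
# Unverified sketch: assumes the 15M checkpoint loads with the standard
# AutoModelForCausalLM API. trust_remote_code may be needed if the
# architecture is custom (assumption, not confirmed by the card).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TNSA/NGen2-15M")
model = AutoModelForCausalLM.from_pretrained("TNSA/NGen2-15M")

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```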
Improvetobe1/test
Improvetobe1
2025-05-20T00:23:57Z
0
0
null
[ "license:cc-by-nc-sa-4.0", "region:us" ]
null
2025-05-20T00:23:57Z
--- license: cc-by-nc-sa-4.0 ---
infogeo/91596536-633e-454f-82c2-a236ab7a2681
infogeo
2025-05-19T05:44:22Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "conversational", "arxiv:2305.18290", "base_model:scb10x/llama-3-typhoon-v1.5-8b-instruct", "base_model:quantized:scb10x/llama-3-typhoon-v1.5-8b-instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-19T05:02:05Z
--- base_model: scb10x/llama-3-typhoon-v1.5-8b-instruct library_name: transformers model_name: 91596536-633e-454f-82c2-a236ab7a2681 tags: - generated_from_trainer - axolotl - dpo - trl licence: license --- # Model Card for 91596536-633e-454f-82c2-a236ab7a2681 This model is a fine-tuned version of [scb10x/llama-3-typhoon-v1.5-8b-instruct](https://huggingface.co/scb10x/llama-3-typhoon-v1.5-8b-instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="infogeo/91596536-633e-454f-82c2-a236ab7a2681", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-28/runs/x71123u1) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
palapotta/palapotta
palapotta
2025-05-19T00:03:54Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-19T00:03:54Z
--- license: apache-2.0 ---
Raniahossam33/gemma-2-9b-it-ditto-Egypt-food-Egypt-food
Raniahossam33
2025-05-18T23:45:37Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-18T23:45:19Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
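The "How to Get Started" section above is still a placeholder. As a stopgap, here is a minimal, unverified sketch that assumes the repo is a causal-LM finetune (the name suggests a gemma-2-9b-it derivative; the repo id comes from this row's `modelId`, everything else is an assumption):

```python
# Unverified sketch: the card provides no usage instructions, so this assumes
# a standard causal-LM checkpoint with an ordinary tokenizer.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Raniahossam33/gemma-2-9b-it-ditto-Egypt-food-Egypt-food"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Describe a classic Egyptian dish.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```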
Epitech/gemma3-1b-sanket-finetuned-AlexisDanlos
Epitech
2025-05-18T22:38:27Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "unsloth", "trl", "grpo", "generated_from_trainer", "base_model:unsloth/gemma-3-1b-it", "base_model:adapter:unsloth/gemma-3-1b-it", "license:gemma", "region:us" ]
null
2025-05-18T22:38:20Z
--- library_name: peft license: gemma base_model: unsloth/gemma-3-1b-it tags: - unsloth - trl - grpo - generated_from_trainer model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [unsloth/gemma-3-1b-it](https://huggingface.co/unsloth/gemma-3-1b-it) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
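The card above documents training hyperparameters but no inference code. A minimal sketch follows, assuming the repo contains a standard PEFT/LoRA adapter for the `unsloth/gemma-3-1b-it` base listed in its metadata; the prompt and generation settings are illustrative.

```python
# Sketch: load the LoRA adapter on top of its base model with PEFT.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "unsloth/gemma-3-1b-it"
adapter = "Epitech/gemma3-1b-sanket-finetuned-AlexisDanlos"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)
model = PeftModel.from_pretrained(model, adapter)  # attach the fine-tuned adapter

inputs = tokenizer("Hello!", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```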
LarryAIDraw/shenhe_pony
LarryAIDraw
2025-05-18T17:22:13Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-05-18T08:05:39Z
--- license: creativeml-openrail-m --- https://civitai.com/models/154447/genshinxl-shenhe-2-outfits
polyglots/llama-3-8b-si-Pretrain-Writing-Style-Codeswitched50-10010
polyglots
2025-05-18T17:20:14Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b", "base_model:finetune:unsloth/llama-3-8b", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-18T17:19:51Z
--- base_model: unsloth/llama-3-8b tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** polyglots - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
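The card above gives no inference example. Here is a minimal sketch, assuming the checkpoint works with the standard text-generation pipeline (the repo id is this row's `modelId`; the prompt is illustrative):

```python
# Illustrative only: assumes a standard llama-3-8b causal-LM checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="polyglots/llama-3-8b-si-Pretrain-Writing-Style-Codeswitched50-10010",
)
print(generator("The history of Sri Lanka", max_new_tokens=40)[0]["generated_text"])
```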
mradermacher/Nemo-asstr-train-i1-GGUF
mradermacher
2025-05-18T14:50:58Z
4
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Edens-Gate/Nemo-asstr-train", "base_model:quantized:Edens-Gate/Nemo-asstr-train", "endpoints_compatible", "region:us", "imatrix" ]
null
2025-02-13T00:12:00Z
---
base_model: Edens-Gate/Nemo-asstr-train
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->

weighted/imatrix quants of https://huggingface.co/Edens-Gate/Nemo-asstr-train

<!-- provided-files -->

static quants are available at https://huggingface.co/mradermacher/Nemo-asstr-train-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Nemo-asstr-train-i1-GGUF/resolve/main/Nemo-asstr-train.i1-IQ1_S.gguf) | i1-IQ1_S | 2.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Nemo-asstr-train-i1-GGUF/resolve/main/Nemo-asstr-train.i1-IQ1_M.gguf) | i1-IQ1_M | 2.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Nemo-asstr-train-i1-GGUF/resolve/main/Nemo-asstr-train.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-asstr-train-i1-GGUF/resolve/main/Nemo-asstr-train.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-asstr-train-i1-GGUF/resolve/main/Nemo-asstr-train.i1-IQ2_S.gguf) | i1-IQ2_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-asstr-train-i1-GGUF/resolve/main/Nemo-asstr-train.i1-IQ2_M.gguf) | i1-IQ2_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-asstr-train-i1-GGUF/resolve/main/Nemo-asstr-train.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Nemo-asstr-train-i1-GGUF/resolve/main/Nemo-asstr-train.i1-Q2_K.gguf) | i1-Q2_K | 3.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Nemo-asstr-train-i1-GGUF/resolve/main/Nemo-asstr-train.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Nemo-asstr-train-i1-GGUF/resolve/main/Nemo-asstr-train.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-asstr-train-i1-GGUF/resolve/main/Nemo-asstr-train.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Nemo-asstr-train-i1-GGUF/resolve/main/Nemo-asstr-train.i1-IQ3_S.gguf) | i1-IQ3_S | 4.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Nemo-asstr-train-i1-GGUF/resolve/main/Nemo-asstr-train.i1-IQ3_M.gguf) | i1-IQ3_M | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-asstr-train-i1-GGUF/resolve/main/Nemo-asstr-train.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Nemo-asstr-train-i1-GGUF/resolve/main/Nemo-asstr-train.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Nemo-asstr-train-i1-GGUF/resolve/main/Nemo-asstr-train.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-asstr-train-i1-GGUF/resolve/main/Nemo-asstr-train.i1-Q4_0.gguf) | i1-Q4_0 | 5.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Nemo-asstr-train-i1-GGUF/resolve/main/Nemo-asstr-train.i1-IQ4_NL.gguf) | i1-IQ4_NL | 5.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Nemo-asstr-train-i1-GGUF/resolve/main/Nemo-asstr-train.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Nemo-asstr-train-i1-GGUF/resolve/main/Nemo-asstr-train.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nemo-asstr-train-i1-GGUF/resolve/main/Nemo-asstr-train.i1-Q4_1.gguf) | i1-Q4_1 | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-asstr-train-i1-GGUF/resolve/main/Nemo-asstr-train.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-asstr-train-i1-GGUF/resolve/main/Nemo-asstr-train.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-asstr-train-i1-GGUF/resolve/main/Nemo-asstr-train.i1-Q6_K.gguf) | i1-Q6_K | 7.0 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
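For readers who want a concrete starting point, here is one possible way to run these quants via the llama-cpp-python bindings. Treat this as a sketch, not the repo's official instructions: it assumes `Llama.from_pretrained` is available (it requires the `huggingface-hub` package), and the filename is simply one of the quants listed in the table above.

```python
# Sketch: download a quant from the Hub and run a short completion.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Nemo-asstr-train-i1-GGUF",
    filename="Nemo-asstr-train.i1-Q4_K_M.gguf",  # "fast, recommended" per the table
)
out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```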
sonnyhuang/flareai2
sonnyhuang
2025-05-18T12:02:19Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-18T11:35:57Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: YN6dgFOtegH5OHG52RVY2 --- # Flareai2 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `YN6dgFOtegH5OHG52RVY2` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "YN6dgFOtegH5OHG52RVY2", "lora_weights": "https://huggingface.co/sonnyhuang/flareai2/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('sonnyhuang/flareai2', weight_name='lora.safetensors') image = pipeline('YN6dgFOtegH5OHG52RVY2').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/sonnyhuang/flareai2/discussions) to add images that show off what you’ve made with this LoRA.
johngreendr1/SFT-ddcfe010-c922-4e65-8b83-0a6a804ee701
johngreendr1
2025-05-18T10:39:02Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-0.5B", "base_model:adapter:Qwen/Qwen1.5-0.5B", "region:us" ]
null
2025-05-18T10:38:58Z
---
base_model: Qwen/Qwen1.5-0.5B
library_name: peft
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

### Framework versions

- PEFT 0.15.1
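The getting-started section above is a placeholder, but the metadata does name a base model and the PEFT library. A minimal sketch follows, assuming the repo holds a standard PEFT adapter for `Qwen/Qwen1.5-0.5B`:

```python
# Sketch: attach the adapter (repo id from this row's modelId) to the base
# model named in the card's metadata. Nothing else about training is documented.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-0.5B")
model = PeftModel.from_pretrained(model, "johngreendr1/SFT-ddcfe010-c922-4e65-8b83-0a6a804ee701")
```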
dosense/pegasus-samsum
dosense
2025-05-18T10:13:01Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "pegasus", "text2text-generation", "generated_from_trainer", "base_model:google/pegasus-cnn_dailymail", "base_model:finetune:google/pegasus-cnn_dailymail", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-05-18T10:11:51Z
---
library_name: transformers
base_model: google/pegasus-cnn_dailymail
tags:
- generated_from_trainer
model-index:
- name: pegasus-samsum
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# pegasus-samsum

This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4228

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.2153 | 0.0109 | 10 | 2.5966 |
| 2.9496 | 0.0217 | 20 | 2.5595 |
| 3.2505 | 0.0326 | 30 | 2.5065 |
| 3.1262 | 0.0434 | 40 | 2.4274 |
| 2.8421 | 0.0543 | 50 | 2.3320 |
| 2.7148 | 0.0652 | 60 | 2.2392 |
| 2.5994 | 0.0760 | 70 | 2.1579 |
| 2.6505 | 0.0869 | 80 | 2.0776 |
| 2.4761 | 0.0977 | 90 | 2.0039 |
| 2.5505 | 0.1086 | 100 | 1.9527 |
| 2.1564 | 0.1195 | 110 | 1.9061 |
| 2.2488 | 0.1303 | 120 | 1.8633 |
| 2.1399 | 0.1412 | 130 | 1.8229 |
| 2.1177 | 0.1520 | 140 | 1.7878 |
| 2.2764 | 0.1629 | 150 | 1.7655 |
| 1.9904 | 0.1738 | 160 | 1.7464 |
| 1.9908 | 0.1846 | 170 | 1.7293 |
| 1.9265 | 0.1955 | 180 | 1.7115 |
| 1.8753 | 0.2064 | 190 | 1.6908 |
| 1.8792 | 0.2172 | 200 | 1.6749 |
| 1.8695 | 0.2281 | 210 | 1.6602 |
| 1.8776 | 0.2389 | 220 | 1.6474 |
| 1.8557 | 0.2498 | 230 | 1.6333 |
| 1.7715 | 0.2607 | 240 | 1.6211 |
| 1.783 | 0.2715 | 250 | 1.6118 |
| 1.7804 | 0.2824 | 260 | 1.6052 |
| 1.7298 | 0.2932 | 270 | 1.5960 |
| 1.7373 | 0.3041 | 280 | 1.5860 |
| 1.8356 | 0.3150 | 290 | 1.5775 |
| 1.6685 | 0.3258 | 300 | 1.5680 |
| 1.8286 | 0.3367 | 310 | 1.5616 |
| 1.8783 | 0.3475 | 320 | 1.5554 |
| 1.8356 | 0.3584 | 330 | 1.5521 |
| 1.7362 | 0.3693 | 340 | 1.5460 |
| 1.7617 | 0.3801 | 350 | 1.5421 |
| 1.6354 | 0.3910 | 360 | 1.5377 |
| 1.7396 | 0.4018 | 370 | 1.5315 |
| 1.7178 | 0.4127 | 380 | 1.5272 |
| 1.7144 | 0.4236 | 390 | 1.5248 |
| 1.7309 | 0.4344 | 400 | 1.5192 |
| 1.7003 | 0.4453 | 410 | 1.5142 |
| 1.6372 | 0.4561 | 420 | 1.5104 |
| 1.7462 | 0.4670 | 430 | 1.5058 |
| 1.7235 | 0.4779 | 440 | 1.5016 |
| 1.6643 | 0.4887 | 450 | 1.5025 |
| 1.7226 | 0.4996 | 460 | 1.4938 |
| 1.7068 | 0.5105 | 470 | 1.4875 |
| 1.626 | 0.5213 | 480 | 1.4866 |
| 1.6784 | 0.5322 | 490 | 1.4843 |
| 1.6674 | 0.5430 | 500 | 1.4836 |
| 1.6622 | 0.5539 | 510 | 1.4824 |
| 1.654 | 0.5648 | 520 | 1.4775 |
| 1.6911 | 0.5756 | 530 | 1.4736 |
| 1.5729 | 0.5865 | 540 | 1.4687 |
| 1.6704 | 0.5973 | 550 | 1.4654 |
| 1.6982 | 0.6082 | 560 | 1.4613 |
| 1.6824 | 0.6191 | 570 | 1.4586 |
| 1.6208 | 0.6299 | 580 | 1.4574 |
| 1.5453 | 0.6408 | 590 | 1.4557 |
| 1.6591 | 0.6516 | 600 | 1.4574 |
| 1.5355 | 0.6625 | 610 | 1.4543 |
| 1.6337 | 0.6734 | 620 | 1.4545 |
| 1.6499 | 0.6842 | 630 | 1.4522 |
| 1.6364 | 0.6951 | 640 | 1.4474 |
| 1.5504 | 0.7059 | 650 | 1.4456 |
| 1.5548 | 0.7168 | 660 | 1.4459 |
| 1.5896 | 0.7277 | 670 | 1.4462 |
| 1.5626 | 0.7385 | 680 | 1.4417 |
| 1.5659 | 0.7494 | 690 | 1.4391 |
| 1.6274 | 0.7602 | 700 | 1.4354 |
| 1.5954 | 0.7711 | 710 | 1.4352 |
| 1.5664 | 0.7820 | 720 | 1.4353 |
| 1.5319 | 0.7928 | 730 | 1.4346 |
| 1.6593 | 0.8037 | 740 | 1.4341 |
| 1.5734 | 0.8146 | 750 | 1.4327 |
| 1.5889 | 0.8254 | 760 | 1.4332 |
| 1.5453 | 0.8363 | 770 | 1.4346 |
| 1.5532 | 0.8471 | 780 | 1.4325 |
| 1.5616 | 0.8580 | 790 | 1.4310 |
| 1.6338 | 0.8689 | 800 | 1.4296 |
| 1.5428 | 0.8797 | 810 | 1.4279 |
| 1.6433 | 0.8906 | 820 | 1.4271 |
| 1.5936 | 0.9014 | 830 | 1.4262 |
| 1.5273 | 0.9123 | 840 | 1.4259 |
| 1.573 | 0.9232 | 850 | 1.4259 |
| 1.5828 | 0.9340 | 860 | 1.4249 |
| 1.5597 | 0.9449 | 870 | 1.4242 |
| 1.5178 | 0.9557 | 880 | 1.4235 |
| 1.5319 | 0.9666 | 890 | 1.4232 |
| 1.5786 | 0.9775 | 900 | 1.4230 |
| 1.5232 | 0.9883 | 910 | 1.4229 |
| 1.5857 | 0.9992 | 920 | 1.4228 |

### Framework versions

- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
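The card reports only training losses, so here is a minimal inference sketch for the summarization use the model name implies; the dialogue and generation settings are illustrative.

```python
# Sketch: summarize a short dialogue with the fine-tuned Pegasus checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="dosense/pegasus-samsum")
dialogue = (
    "Anna: Are we still on for lunch?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you there."
)
print(summarizer(dialogue, max_length=40, min_length=5)[0]["summary_text"])
```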
dimasik87/143dff84-6183-4ed9-9712-02ffc660a9bd
dimasik87
2025-05-18T10:10:50Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "conversational", "arxiv:2305.18290", "base_model:NousResearch/Nous-Capybara-7B-V1", "base_model:quantized:NousResearch/Nous-Capybara-7B-V1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-18T09:10:43Z
--- base_model: NousResearch/Nous-Capybara-7B-V1 library_name: transformers model_name: 143dff84-6183-4ed9-9712-02ffc660a9bd tags: - generated_from_trainer - axolotl - dpo - trl licence: license --- # Model Card for 143dff84-6183-4ed9-9712-02ffc660a9bd This model is a fine-tuned version of [NousResearch/Nous-Capybara-7B-V1](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="dimasik87/143dff84-6183-4ed9-9712-02ffc660a9bd", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/a8utuqdj) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
KingEmpire/sn21_omega_1805_2
KingEmpire
2025-05-18T09:35:10Z
0
0
null
[ "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-05-18T09:19:27Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
matosmduyerfyq/fbdfb
matosmduyerfyq
2025-05-18T07:54:00Z
0
0
null
[ "license:bigcode-openrail-m", "region:us" ]
null
2025-05-18T07:54:00Z
--- license: bigcode-openrail-m ---
Mohammad87/abir
Mohammad87
2025-05-18T06:57:09Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-18T06:57:09Z
--- license: apache-2.0 ---
ccclllwww/smoker_cls_base_V4
ccclllwww
2025-05-18T06:01:08Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-05-18T05:49:27Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: smoker_cls_base_V4 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.8811881188118812 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smoker_cls_base_V4 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3615 - Accuracy: 0.8812 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.6591 | 1.0 | 15 | 0.5081 | 0.8911 | | 0.3677 | 2.0 | 30 | 0.3530 | 0.9010 | | 0.3273 | 2.8421 | 42 | 0.3459 | 0.8911 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
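The card above reports accuracy but no usage snippet. A minimal sketch follows, assuming standard image-classification pipeline usage; the image path is a placeholder, and the label set is whatever the imagefolder dataset defined.

```python
# Sketch: classify a local image with the fine-tuned ViT checkpoint.
from transformers import pipeline

classifier = pipeline("image-classification", model="ccclllwww/smoker_cls_base_V4")
print(classifier("example.jpg"))  # e.g. [{"label": ..., "score": ...}, ...]
```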
nis12ram/gemma3-1b-it-hindiNER-ner-exp1
nis12ram
2025-05-18T03:31:33Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/gemma-3-1b-it", "base_model:finetune:unsloth/gemma-3-1b-it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-18T03:28:21Z
--- base_model: unsloth/gemma-3-1b-it tags: - text-generation-inference - transformers - unsloth - gemma3_text license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** nis12ram - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-1b-it This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
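The card does not document the NER prompt format this finetune expects, so the sketch below only shows generic chat-style generation with the checkpoint; the Hindi input is illustrative.

```python
# Sketch: generic chat generation; the exact NER prompt format is undocumented.
from transformers import pipeline

generator = pipeline("text-generation", model="nis12ram/gemma3-1b-it-hindiNER-ner-exp1")
messages = [{"role": "user", "content": "नरेंद्र मोदी दिल्ली में रहते हैं।"}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```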
qingy2024/Formatter-0.6B
qingy2024
2025-05-17T23:08:06Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "qwen2", "trl", "sft", "conversational", "en", "base_model:unsloth/Qwen3-0.6B-Base", "base_model:finetune:unsloth/Qwen3-0.6B-Base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-17T22:45:16Z
---
base_model: unsloth/Qwen3-0.6B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---

# Formatter 0.6B

- **Developed by:** qingy2024
- **License:** apache-2.0
- **Finetuned from model:** Qwen3 0.6B (base)

This is mainly an experiment in adding special tokens and changing the chat template while fine-tuning.

```jinja
{%- set last_message = messages[-1] -%}
{%- if last_message.role == "user" -%}
    {{- '<|problem_start|>\n' + last_message.content + '<|problem_end|>\n' -}}
{%- elif last_message.role == "assistant" -%}
    {%- for message in messages -%}
        {%- if message.role == "user" -%}
            {{- '<|problem_start|>\n' + message.content + '<|problem_end|>\n' -}}
        {%- elif message.role == "assistant" -%}
            {{- '<|formatted_problem_start|>\n' + message.content + '<|formatted_problem_end|>\n' -}}
        {%- else -%}
            {{- raise('Unknown role: ' + message.role) -}}
        {%- endif -%}
    {%- endfor -%}
{%- else -%}
    {{- raise('Unsupported role: ' + last_message.role) -}}
{%- endif -%}
{%- if add_generation_prompt and last_message.role == "user" -%}
    {{- '<|formatted_problem_start|>\n' -}}
{%- endif -%}
```

Example:

```
User: Read the excerpt from Dr. Martin Luther King Jr.’s "I Have a Dream" speech.

I am not unmindful that some of you have come here out of great trials and tribulations. Some of you have come fresh from narrow jail cells. Some of you have come from areas where your quest for freedom left you battered by the storms of persecution and staggered by the winds of police brutality. You have been the veterans of creative suffering. Continue to work with the faith that unearned suffering is redemptive. Go back to Mississippi, go back to Alabama, go back to South Carolina, go back to Georgia, go back to Louisiana, go back to the slums and ghettos of our northern cities, knowing that somehow this situation can and will be changed. Let us not wallow in the valley of despair.

Which lines in this paragraph can be used as examples of metaphor? Select 3 options.

great trials and tribulations
storms of persecution
winds of police brutality
go back to Georgia
this situation can and will be changed
let us not wallow in the valley of despair
```

```
LLM: Read the excerpt from Dr. Martin Luther King Jr.’s "I Have a Dream" speech.

I am not unmindful that some of you have come here out of great trials and tribulations. Some of you have come fresh from narrow jail cells. Some of you have come from areas where your quest for freedom left you battered by the storms of persecution and staggered by the winds of police brutality. You have been the veterans of creative suffering. Continue to work with the faith that unearned suffering is redemptive. Go back to Mississippi, go back to Alabama, go back to South Carolina, go back to Georgia, go back to Louisiana, go back to the slums and ghettos of our northern cities, knowing that somehow this situation can and will be changed. Let us not wallow in the valley of despair.

Which lines in this paragraph can be used as examples of metaphor? Select 3 options.

A. great trials and tribulations
B. storms of persecution
C. winds of police brutality
D. go back to Georgia
E. this situation can and will be changed
F. let us not wallow in the valley of despair
```

### Lessons Learned

- When adding new tokens to the model, LoRA performs much worse; use full fine-tuning for better results.
- Be very careful with chat templates: every character, newline, and space matters, and deviating from the trained format degrades performance.
- For Qwen base models, leave `<|endoftext|>` as the EOS token; you can then train the model to use other tokens like `<|im_end|>`. If you set the EOS token to `<|im_end|>`, the model gets confused.
- For Qwen models in general, always put `<|endoftext|>` at the end of each training example.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
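To make the custom template concrete, here is a short sketch of how it renders a prompt, assuming the tokenizer in this repo ships the template shown above:

```python
# Sketch: render the custom chat template for a single user turn.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("qingy2024/Formatter-0.6B")
messages = [{"role": "user", "content": "Solve 2x + 3 = 7."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
# Expected shape, per the template above:
# <|problem_start|>
# Solve 2x + 3 = 7.<|problem_end|>
# <|formatted_problem_start|>
```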
haihp02/f0a36cb6-fcb0-4099-8401-3a75e824729a-adapter
haihp02
2025-05-17T20:18:06Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "dpo", "arxiv:2305.18290", "base_model:unsloth/Qwen2.5-Coder-1.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-Coder-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-17T20:17:37Z
--- base_model: unsloth/Qwen2.5-Coder-1.5B-Instruct library_name: transformers model_name: f0a36cb6-fcb0-4099-8401-3a75e824729a-adapter tags: - generated_from_trainer - trl - sft - dpo licence: license --- # Model Card for f0a36cb6-fcb0-4099-8401-3a75e824729a-adapter This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Coder-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="haihp02/f0a36cb6-fcb0-4099-8401-3a75e824729a-adapter", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/trunghainguyenhp02/sn56-dpo-train/runs/bigv4zvb) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.7.0+cu126 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
JoshMe1/289e7cc5-a10e-4bec-b9fd-075a5e758196
JoshMe1
2025-05-17T11:47:38Z
0
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO", "base_model:adapter:NousResearch/Nous-Hermes-2-Mistral-7B-DPO", "license:apache-2.0", "region:us" ]
null
2025-05-17T04:20:45Z
---
library_name: peft
license: apache-2.0
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 289e7cc5-a10e-4bec-b9fd-075a5e758196
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)

<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`

```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
bf16: false
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 4b7711219ba29c24_train_data.json
  ds_type: json
  field: subject
  path: /workspace/input_data/4b7711219ba29c24_train_data.json
  type: completion
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: 3
ema_decay: 0.9992
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: true
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
greater_is_better: false
group_by_length: false
hub_model_id: JoshMe1/289e7cc5-a10e-4bec-b9fd-075a5e758196
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-06
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 256
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: reduce_lr_on_plateau
lr_scheduler_factor: 0.5
lr_scheduler_patience: 2
max_grad_norm: 0.3
max_memory:
  0: 130GB
max_steps: 500
metric_for_best_model: eval_loss
micro_batch_size: 2
mlflow_experiment_name: /tmp/4b7711219ba29c24_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_hf
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
saves_per_epoch: null
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
use_ema: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c1843560-f26f-4c08-b824-8abf85012863
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c1843560-f26f-4c08-b824-8abf85012863
warmup_ratio: 0.03
weight_decay: 0.01
xformers_attention: null
```

</details><br>

# 289e7cc5-a10e-4bec-b9fd-075a5e758196

This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4726

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: reduce_lr_on_plateau
- lr_scheduler_warmup_steps: 15
- training_steps: 500
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 7.0319 |
| 31.2182 | 0.0055 | 100 | 3.7290 |
| 29.4294 | 0.0110 | 200 | 3.5411 |
| 29.0039 | 0.0166 | 300 | 3.5150 |
| 28.2303 | 0.0221 | 400 | 3.4909 |
| 27.1671 | 0.0276 | 500 | 3.4726 |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
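Since the config shows a LoRA adapter but the card has no usage snippet, here is a minimal sketch, assuming the repo holds a standard PEFT adapter for the base model named in the metadata; everything else is illustrative.

```python
# Sketch: load the LoRA adapter over its Nous-Hermes-2 base with PEFT.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("NousResearch/Nous-Hermes-2-Mistral-7B-DPO")
model = PeftModel.from_pretrained(base, "JoshMe1/289e7cc5-a10e-4bec-b9fd-075a5e758196")
```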
narpas/Alkahest-V4-LLaMa-70B-6.0bpw-h8-exl2
narpas
2025-05-17T05:54:49Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2311.03099", "base_model:Tarek07/Alkahest-V4-LLaMa-70B", "base_model:quantized:Tarek07/Alkahest-V4-LLaMa-70B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "6-bit", "exl2", "region:us" ]
text-generation
2025-04-19T12:58:55Z
--- base_model: - Tarek07/Alkahest-V4-LLaMa-70B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [TareksLab/Stylizer-V2-LLaMa-70B](https://huggingface.co/TareksLab/Stylizer-V2-LLaMa-70B) as a base. ### Models Merged The following models were included in the merge: * [TareksLab/Dungeons-and-Dragons-V1.2-LLaMa-70B](https://huggingface.co/TareksLab/Dungeons-and-Dragons-V1.2-LLaMa-70B) * [TareksLab/Malediction-V2-LLaMa-70B](https://huggingface.co/TareksLab/Malediction-V2-LLaMa-70B) * [TareksLab/Wordsmith-V9-LLaMa-70B](https://huggingface.co/TareksLab/Wordsmith-V9-LLaMa-70B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: TareksLab/Wordsmith-V9-LLaMa-70B parameters: weight: 0.25 density: 0.5 - model: TareksLab/Malediction-V2-LLaMa-70B parameters: weight: 0.25 density: 0.5 - model: TareksLab/Dungeons-and-Dragons-V1.2-LLaMa-70B parameters: weight: 0.25 density: 0.5 - model: TareksLab/Stylizer-V2-LLaMa-70B parameters: weight: 0.25 density: 0.5 merge_method: dare_ties base_model: TareksLab/Stylizer-V2-LLaMa-70B parameters: normalize: false out_dtype: bfloat16 chat_template: llama3 tokenizer: source: base ```