| Field | Type |
|---|---|
| modelId | string |
| author | string |
| last_modified | timestamp[us, tz=UTC] |
| downloads | int64 |
| likes | int64 |
| library_name | string |
| tags | sequence |
| pipeline_tag | string |
| createdAt | timestamp[us, tz=UTC] |
| card | string |
aleegis/d3531ad9-a1cb-4904-9994-ddb985e19ca3
aleegis
2025-04-28T06:18:17Z
0
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:teknium/OpenHermes-2.5-Mistral-7B", "base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B", "license:apache-2.0", "region:us" ]
null
2025-04-28T04:58:18Z
--- library_name: peft license: apache-2.0 base_model: teknium/OpenHermes-2.5-Mistral-7B tags: - axolotl - generated_from_trainer model-index: - name: d3531ad9-a1cb-4904-9994-ddb985e19ca3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: teknium/OpenHermes-2.5-Mistral-7B bf16: auto chat_template: llama3 dataloader_num_workers: 12 dataset_prepared_path: null datasets: - data_files: - 0117447d3950c946_train_data.json ds_type: json format: custom path: /workspace/input_data/0117447d3950c946_train_data.json type: field_instruction: first_message field_output: first_answer format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 8 gradient_checkpointing: false group_by_length: false hub_model_id: aleegis/d3531ad9-a1cb-4904-9994-ddb985e19ca3 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: null lora_alpha: 32 lora_dropout: 0.15 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true loraplus_lr_embedding: 1.0e-06 loraplus_lr_ratio: 16 lr_scheduler: cosine max_grad_norm: 1 max_steps: 1500 micro_batch_size: 2 mlflow_experiment_name: /tmp/0117447d3950c946_train_data.json model_type: AutoModelForCausalLM num_epochs: 200 optimizer: adamw_torch_fused output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null save_total_limit: 10 saves_per_epoch: 0 sequence_len: 1024 special_tokens: pad_token: <|im_end|> strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.0 wandb_entity: null wandb_mode: online wandb_name: dace43b8-8ffb-4c18-baa0-ebd02df71793 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: dace43b8-8ffb-4c18-baa0-ebd02df71793 warmup_steps: 100 weight_decay: 0 xformers_attention: null ``` </details><br> # d3531ad9-a1cb-4904-9994-ddb985e19ca3 This model is a fine-tuned version of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) on the None dataset. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1500 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
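The card above stops short of usage code. Below is a minimal sketch of loading the LoRA adapter on its base model with PEFT; the repo ids come from the config above, while dtype, device placement, and the prompt are assumptions rather than anything stated in the card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "teknium/OpenHermes-2.5-Mistral-7B"
adapter_id = "aleegis/d3531ad9-a1cb-4904-9994-ddb985e19ca3"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# device_map="auto" requires the accelerate package; drop it to load on CPU.
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
# Attach the LoRA weights trained with the axolotl config above (r=32, alpha=32).
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Explain LoRA fine-tuning in one sentence.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```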
jtromero/qwen2-0.5b-phase2-csn-lora-ff
jtromero
2025-04-28T06:16:20Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "en", "arxiv:2407.10671", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-28T06:16:08Z
--- license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B/blob/main/LICENSE language: - en pipeline_tag: text-generation library_name: transformers --- # Qwen2.5-0.5B ## Introduction Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2: - Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains. - Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots. - **Long-context Support** up to 128K tokens, with generation of up to 8K tokens. - **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more. **This repo contains the base 0.5B Qwen2.5 model**, which has the following features: - Type: Causal Language Models - Training Stage: Pretraining - Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings - Number of Parameters: 0.49B - Number of Parameters (Non-Embedding): 0.36B - Number of Layers: 24 - Number of Attention Heads (GQA): 14 for Q and 2 for KV - Context Length: Full 32,768 tokens **We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model. For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/). ## Requirements The code of Qwen2.5 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`. With `transformers<4.37.0`, you will encounter the following error: ``` KeyError: 'qwen2' ``` ## Evaluation & Performance Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/). For requirements on GPU memory and the respective throughput, see the results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html). ## Citation If you find our work helpful, feel free to cite us.
``` @misc{qwen2.5, title = {Qwen2.5: A Party of Foundation Models}, url = {https://qwenlm.github.io/blog/qwen2.5/}, author = {Qwen Team}, month = {September}, year = {2024} } @article{qwen2, title={Qwen2 Technical Report}, author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan}, journal={arXiv preprint arXiv:2407.10671}, year={2024} } ```
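To make the Requirements note above concrete, here is a hedged quick-start sketch for the base model with a recent `transformers` release (plain completion rather than chat, since the card advises against conversational use of base models). The prompt and generation settings are illustrative only, not taken from the card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# transformers >= 4.37.0 is required; older versions raise KeyError: 'qwen2'.
model_id = "Qwen/Qwen2.5-0.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" needs accelerate installed; remove it to load on CPU.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("The three laws of thermodynamics are", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```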
root-jlee/q-Taxi-v3-g100
root-jlee
2025-04-28T06:16:19Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-04-28T06:16:08Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3-g100 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python
import gymnasium as gym  # or `import gym`, depending on your setup

# load_from_hub: assumed helper that downloads and unpickles the saved model dict
model = load_from_hub(repo_id="root-jlee/q-Taxi-v3-g100", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
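As a follow-up to the usage snippet, a hedged sketch of rolling the loaded Q-table out greedily for one episode; the `"qtable"` key and the Gymnasium step API are assumptions based on the usual Deep RL course format, not guaranteed by this repo.

```python
import gymnasium as gym
import numpy as np

# `model` is the dict returned by load_from_hub in the snippet above.
env = gym.make(model["env_id"])         # "Taxi-v3"
qtable = np.array(model["qtable"])      # assumed key holding the state x action table

state, info = env.reset()
done, episode_return = False, 0
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    episode_return += reward
    done = terminated or truncated
print("episode return:", episode_return)
```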
tuKdfm83Jkp/jidkyf
tuKdfm83Jkp
2025-04-28T06:14:47Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-28T06:14:47Z
--- license: apache-2.0 ---
HozEWaQ33xY/HozEWaQ33xY
HozEWaQ33xY
2025-04-28T06:09:15Z
0
0
null
[ "license:bigcode-openrail-m", "region:us" ]
null
2025-04-28T06:09:15Z
--- license: bigcode-openrail-m ---
VITA-MLLM/Long-VITA-1M_HF
VITA-MLLM
2025-04-28T06:07:42Z
10
1
null
[ "safetensors", "long_vita", "custom_code", "dataset:VITA-MLLM/Long-VITA-Training-Data", "base_model:VITA-MLLM/Long-VITA-128K", "base_model:finetune:VITA-MLLM/Long-VITA-128K", "license:apache-2.0", "region:us" ]
null
2025-02-15T04:48:05Z
--- license: apache-2.0 datasets: - VITA-MLLM/Long-VITA-Training-Data base_model: - VITA-MLLM/Long-VITA-128K --- # Long-VITA-1M Github: https://github.com/VITA-MLLM/Long-VITA ## 👀 Overview Long-VITA is a strong long-context visual language model and supports more than 1 million tokens. - Long-VITA-1M weights are trained on Ascend NPUs with MindSpeed. The original weight is at https://huggingface.co/VITA-MLLM/Long-VITA-1M. - We also implemented Long-VITA on Megatron with the Transformer Engine to infer and evaluate on Nvidia GPUs. The converted weight is at https://huggingface.co/VITA-MLLM/Long-VITA-1M_MG. - We also implemented Long-VITA on DeepSpeed with the Huggingface Transformers to infer and evaluate on Nvidia GPUs. The converted weight is at https://huggingface.co/VITA-MLLM/Long-VITA-1M_HF. ## 📈 Experimental Results - **Comparison of image understanding**. ![image](https://github.com/user-attachments/assets/235bdb0e-37e6-4a5f-b20b-21b0bb83278a) ![image](https://github.com/user-attachments/assets/72250c5b-7d33-4dba-98ab-0539bae08703) - **Comparison of video understanding**. ![image](https://github.com/user-attachments/assets/7f09662b-bd53-4504-927a-0e45214a049d) ![image](https://github.com/user-attachments/assets/87bd2f4d-baf5-4a63-8002-151e30f52147) - **Effectiveness of Logits-Masked LM Head**. ![image](https://github.com/user-attachments/assets/7a06b4dd-267c-470f-80f2-d26c87e23460) ## Models Model | LLM Size | Training Context | Training Frames | MindSpeed Weights | Megatron Weights | Huggingface Weights ---------------:|---------:|-----------------:|----------------:|------------------------------------------------:|---------------------------------------------------:|---------------------------------------------------: Long-VITA-16K | 14B | 16,384 | 64 | https://huggingface.co/VITA-MLLM/Long-VITA-16K | https://huggingface.co/VITA-MLLM/Long-VITA-16K_MG | https://huggingface.co/VITA-MLLM/Long-VITA-16K_HF Long-VITA-128K | 14B | 131,072 | 512 | https://huggingface.co/VITA-MLLM/Long-VITA-128K | https://huggingface.co/VITA-MLLM/Long-VITA-128K_MG | https://huggingface.co/VITA-MLLM/Long-VITA-128K_HF Long-VITA-1M | 14B | 1,048,576 | 4,096 | https://huggingface.co/VITA-MLLM/Long-VITA-1M | https://huggingface.co/VITA-MLLM/Long-VITA-1M_MG | https://huggingface.co/VITA-MLLM/Long-VITA-1M_HF ## ACCEPTABLE USE POLICY Any license on the model is subject to your compliance with the Acceptable Use Policy, and You must not violate (or encourage or permit anyone else to violate) any term of the Acceptable Use Policy. Tencent reserves the right to update this Acceptable Use Policy from time to time. Tencent endeavors to promote safe and fair use of its tools and features, including VITA. You agree not to use VITA or any of its derivatives: 1. In any way that violates any applicable national, federal, state, local, international or any other law or regulation; 2. To harm Yourself or others; 3. To repurpose or distribute output from VITA or any of its derivatives to harm Yourself or others; 4. To override or circumvent the safety guardrails and safeguards We have put in place; 5. For the purpose of exploiting, harming or attempting to exploit or harm minors in any way; 6. To generate or disseminate verifiably false information and/or content with the purpose of harming others or influencing elections; 7. To generate or facilitate false online engagement, including fake reviews and other means of fake online engagement; 8. To intentionally defame, disparage or otherwise harass others; 9. 
To generate and/or disseminate malware (including ransomware) or any other content to be used for the purpose of harming electronic systems; 10. To generate or disseminate personal identifiable information with the purpose of harming others; 11. To generate or disseminate information (including images, code, posts, articles), and place the information in any public context (including –through the use of bot generated tweets), without expressly and conspicuously identifying that the information and/or content is machine generated; 12. To impersonate another individual without consent, authorization, or legal right; 13. To make high-stakes automated decisions in domains that affect an individual’s safety, rights or wellbeing (e.g., law enforcement, migration, medicine/health, management of critical infrastructure, safety components of products, essential services, credit, employment, housing, education, social scoring, or insurance); 14. In a manner that violates or disrespects the social ethics and moral standards of other countries or regions; 15. To perform, facilitate, threaten, incite, plan, promote or encourage violent extremism or terrorism; 16. For any use intended to discriminate against or harm individuals or groups based on protected characteristics or categories, online or offline social behavior or known or predicted personal or personality characteristics; 17. To intentionally exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm; 18. For military purposes; 19. To engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or other professional practices.
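The card lists where the converted weights live but not how to fetch them. A minimal sketch using `huggingface_hub` follows; actual inference goes through the repo's custom code and the DeepSpeed/Transformers pipeline described on the GitHub page, which is not reproduced here.

```python
from huggingface_hub import snapshot_download

# Download the DeepSpeed/Transformers-compatible Long-VITA-1M weights locally.
local_dir = snapshot_download(
    repo_id="VITA-MLLM/Long-VITA-1M_HF",
    local_dir="Long-VITA-1M_HF",
)
print("weights available at", local_dir)
```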
VITA-MLLM/Long-VITA-128K_MG
VITA-MLLM
2025-04-28T06:06:44Z
0
1
null
[ "dataset:VITA-MLLM/Long-VITA-Training-Data", "base_model:VITA-MLLM/Long-VITA-16K", "base_model:finetune:VITA-MLLM/Long-VITA-16K", "license:apache-2.0", "region:us" ]
null
2024-12-23T03:22:03Z
--- license: apache-2.0 datasets: - VITA-MLLM/Long-VITA-Training-Data base_model: - VITA-MLLM/Long-VITA-16K --- # Long-VITA-128K Github: https://github.com/VITA-MLLM/Long-VITA ## 👀 Overview Long-VITA is a strong long-context visual language model and supports more than 1 million tokens. - Long-VITA-128K weights are trained on Ascend NPUs with MindSpeed. The original weight is at https://huggingface.co/VITA-MLLM/Long-VITA-128K. - We also implemented Long-VITA on Megatron with the Transformer Engine to infer and evaluate on Nvidia GPUs. The converted weight is at https://huggingface.co/VITA-MLLM/Long-VITA-128K_MG. - We also implemented Long-VITA on DeepSpeed with the Huggingface Transformers to infer and evaluate on Nvidia GPUs. The converted weight is at https://huggingface.co/VITA-MLLM/Long-VITA-128K_HF. ## 📈 Experimental Results - **Comparison of image understanding**. ![image](https://github.com/user-attachments/assets/235bdb0e-37e6-4a5f-b20b-21b0bb83278a) ![image](https://github.com/user-attachments/assets/72250c5b-7d33-4dba-98ab-0539bae08703) - **Comparison of video understanding**. ![image](https://github.com/user-attachments/assets/7f09662b-bd53-4504-927a-0e45214a049d) ![image](https://github.com/user-attachments/assets/87bd2f4d-baf5-4a63-8002-151e30f52147) - **Effectiveness of Logits-Masked LM Head**. ![image](https://github.com/user-attachments/assets/7a06b4dd-267c-470f-80f2-d26c87e23460) ## Models Model | LLM Size | Training Context | Training Frames | MindSpeed Weights | Megatron Weights | Huggingface Weights ---------------:|---------:|-----------------:|----------------:|------------------------------------------------:|---------------------------------------------------:|---------------------------------------------------: Long-VITA-16K | 14B | 16,384 | 64 | https://huggingface.co/VITA-MLLM/Long-VITA-16K | https://huggingface.co/VITA-MLLM/Long-VITA-16K_MG | https://huggingface.co/VITA-MLLM/Long-VITA-16K_HF Long-VITA-128K | 14B | 131,072 | 512 | https://huggingface.co/VITA-MLLM/Long-VITA-128K | https://huggingface.co/VITA-MLLM/Long-VITA-128K_MG | https://huggingface.co/VITA-MLLM/Long-VITA-128K_HF Long-VITA-1M | 14B | 1,048,576 | 4,096 | https://huggingface.co/VITA-MLLM/Long-VITA-1M | https://huggingface.co/VITA-MLLM/Long-VITA-1M_MG | https://huggingface.co/VITA-MLLM/Long-VITA-1M_HF ## ACCEPTABLE USE POLICY Any license on the model is subject to your compliance with the Acceptable Use Policy, and You must not violate (or encourage or permit anyone else to violate) any term of the Acceptable Use Policy. Tencent reserves the right to update this Acceptable Use Policy from time to time. Tencent endeavors to promote safe and fair use of its tools and features, including VITA. You agree not to use VITA or any of its derivatives: 1. In any way that violates any applicable national, federal, state, local, international or any other law or regulation; 2. To harm Yourself or others; 3. To repurpose or distribute output from VITA or any of its derivatives to harm Yourself or others; 4. To override or circumvent the safety guardrails and safeguards We have put in place; 5. For the purpose of exploiting, harming or attempting to exploit or harm minors in any way; 6. To generate or disseminate verifiably false information and/or content with the purpose of harming others or influencing elections; 7. To generate or facilitate false online engagement, including fake reviews and other means of fake online engagement; 8. To intentionally defame, disparage or otherwise harass others; 9. 
To generate and/or disseminate malware (including ransomware) or any other content to be used for the purpose of harming electronic systems; 10. To generate or disseminate personal identifiable information with the purpose of harming others; 11. To generate or disseminate information (including images, code, posts, articles), and place the information in any public context (including –through the use of bot generated tweets), without expressly and conspicuously identifying that the information and/or content is machine generated; 12. To impersonate another individual without consent, authorization, or legal right; 13. To make high-stakes automated decisions in domains that affect an individual’s safety, rights or wellbeing (e.g., law enforcement, migration, medicine/health, management of critical infrastructure, safety components of products, essential services, credit, employment, housing, education, social scoring, or insurance); 14. In a manner that violates or disrespects the social ethics and moral standards of other countries or regions; 15. To perform, facilitate, threaten, incite, plan, promote or encourage violent extremism or terrorism; 16. For any use intended to discriminate against or harm individuals or groups based on protected characteristics or categories, online or offline social behavior or known or predicted personal or personality characteristics; 17. To intentionally exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm; 18. For military purposes; 19. To engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or other professional practices.
sharatpc/ggbt
sharatpc
2025-04-28T06:01:36Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-28T06:01:36Z
--- license: apache-2.0 ---
AndyPark/gemma3_lora_dpo_chosen
AndyPark
2025-04-28T05:59:52Z
0
0
null
[ "safetensors", "gemma3", "license:apache-2.0", "region:us" ]
null
2025-04-28T04:58:20Z
--- license: apache-2.0 ---
hyoo14/gemma-3-1b-pt-meta_pathogen
hyoo14
2025-04-28T05:46:29Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-28T05:46:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
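The auto-generated card above leaves the quick-start section as a placeholder. Below is a hypothetical sketch, assuming the repository holds a standard causal-LM checkpoint that loads through `transformers`; this is not confirmed anywhere in the card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "hyoo14/gemma-3-1b-pt-meta_pathogen"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
# Assumed: a causal LM; device_map="auto" needs accelerate, drop it for CPU-only loading.
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("Example input:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```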
cata2002/llama-3-8b-full-dataset
cata2002
2025-04-28T05:45:14Z
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-28T05:43:59Z
--- base_model: unsloth/llama-3-8b-Instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** cata2002 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
nqdhocai/LogicLlama-3.2-3B-NoDes-v1
nqdhocai
2025-04-28T05:44:31Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Llama-3.2-3B-Instruct", "base_model:finetune:unsloth/Llama-3.2-3B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-28T05:40:45Z
--- base_model: unsloth/Llama-3.2-3B-Instruct tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** nqdhocai - **License:** apache-2.0 - **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
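A hedged usage sketch via the `transformers` text-generation pipeline; it assumes the repo hosts merged, directly loadable weights, which the card does not state explicitly.

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="nqdhocai/LogicLlama-3.2-3B-NoDes-v1",
    torch_dtype="auto",
    device_map="auto",  # requires accelerate; remove to run on CPU
)
prompt = "State the contrapositive of: if it rains, then the ground is wet."
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```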
Triangle104/Qwen2.5-0.5B-Q4_K_S-GGUF
Triangle104
2025-04-28T05:36:47Z
4
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "base_model:Qwen/Qwen2.5-0.5B", "base_model:quantized:Qwen/Qwen2.5-0.5B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-09-22T17:53:22Z
--- base_model: Qwen/Qwen2.5-0.5B language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B/blob/main/LICENSE pipeline_tag: text-generation tags: - llama-cpp - gguf-my-repo --- # Triangle104/Qwen2.5-0.5B-Q4_K_S-GGUF This model was converted to GGUF format from [`Qwen/Qwen2.5-0.5B`](https://huggingface.co/Qwen/Qwen2.5-0.5B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-0.5B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Qwen2.5-0.5B-Q4_K_S-GGUF --hf-file qwen2.5-0.5b-q4_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Qwen2.5-0.5B-Q4_K_S-GGUF --hf-file qwen2.5-0.5b-q4_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Qwen2.5-0.5B-Q4_K_S-GGUF --hf-file qwen2.5-0.5b-q4_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Qwen2.5-0.5B-Q4_K_S-GGUF --hf-file qwen2.5-0.5b-q4_k_s.gguf -c 2048 ```
Deekila-Sherpa-and-Aniket-Viral-Videos/Original.Viral.Clip.Deekila.Aniket.Viral.Video.Leaks.official
Deekila-Sherpa-and-Aniket-Viral-Videos
2025-04-28T05:35:49Z
0
0
null
[ "region:us" ]
null
2025-04-28T05:35:00Z
Deekila Aniket Viral Video is taking the internet by storm! This funny video has gone viral across social media platforms, making everyone laugh. If you're looking for the most hilarious trending clip of 2025, this is a must-watch. Users are sharing and reacting to this viral sensation, making it one of the top funny videos online. Whether you're here for a laugh or want to stay updated on viral trends, this video will keep you entertained. Watch now and share the fun with your friends! Get the latest updates, reactions, and full details on the Deekila Aniket viral funny video. Don't miss out! Find out why this clip is making headlines everywhere!
Triangle104/Qwen2.5-1.5B-Q4_K_M-GGUF
Triangle104
2025-04-28T05:35:45Z
4
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "base_model:Qwen/Qwen2.5-1.5B", "base_model:quantized:Qwen/Qwen2.5-1.5B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-09-22T17:25:25Z
--- base_model: Qwen/Qwen2.5-1.5B language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE pipeline_tag: text-generation tags: - llama-cpp - gguf-my-repo --- # Triangle104/Qwen2.5-1.5B-Q4_K_M-GGUF This model was converted to GGUF format from [`Qwen/Qwen2.5-1.5B`](https://huggingface.co/Qwen/Qwen2.5-1.5B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-1.5B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Qwen2.5-1.5B-Q4_K_M-GGUF --hf-file qwen2.5-1.5b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Qwen2.5-1.5B-Q4_K_M-GGUF --hf-file qwen2.5-1.5b-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Qwen2.5-1.5B-Q4_K_M-GGUF --hf-file qwen2.5-1.5b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Qwen2.5-1.5B-Q4_K_M-GGUF --hf-file qwen2.5-1.5b-q4_k_m.gguf -c 2048 ```
OpenVINO/Qwen2.5-14B-Instruct-int8-ov
OpenVINO
2025-04-28T05:35:07Z
14
0
null
[ "openvino", "qwen2", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "base_model:Qwen/Qwen2.5-14B-Instruct", "base_model:quantized:Qwen/Qwen2.5-14B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-04-11T16:38:51Z
--- license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-14B-Instruct/blob/main/LICENSE base_model: - Qwen/Qwen2.5-14B-Instruct base_model_relation: quantized language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara --- # Qwen2.5-14B-Instruct-int8-ov * Model creator: [Qwen](https://huggingface.co/Qwen) * Original model: [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) ## Description This is [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2025/documentation/openvino-ir-format.html) (Intermediate Representation) format with weights compressed to INT8 by [NNCF](https://github.com/openvinotoolkit/nncf). ## Quantization Parameters Weight compression was performed using `nncf.compress_weights` with the following parameters: * mode: **INT8_ASYM** For more information on quantization, check the [OpenVINO model optimization guide](https://docs.openvino.ai/2025/openvino-workflow/model-optimization-guide/weight-compression.html). ## Compatibility The provided OpenVINO™ IR model is compatible with: * OpenVINO version 2025.1.0 and higher * Optimum Intel 1.24.0 and higher ## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) 1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend: ``` pip install optimum[openvino] ``` 2. Run model inference: ``` from transformers import AutoTokenizer from optimum.intel.openvino import OVModelForCausalLM model_id = "OpenVINO/qwen2.5-14b-instruct-int8-ov" tokenizer = AutoTokenizer.from_pretrained(model_id) model = OVModelForCausalLM.from_pretrained(model_id) inputs = tokenizer("What is OpenVINO?", return_tensors="pt") outputs = model.generate(**inputs, max_length=200) text = tokenizer.batch_decode(outputs)[0] print(text) ``` For more examples and possible optimizations, refer to the [Inference with Optimum Intel](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-optimum-intel.html). ## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai) 1. Install packages required for using OpenVINO GenAI. ``` pip install openvino-genai huggingface_hub ``` 2. Download model from HuggingFace Hub ``` import huggingface_hub as hf_hub model_id = "OpenVINO/qwen2.5-14b-instruct-int8-ov" model_path = "qwen2.5-14b-instruct-int8-ov" hf_hub.snapshot_download(model_id, local_dir=model_path) ``` 3. 
Run model inference: ``` import openvino_genai as ov_genai device = "CPU" pipe = ov_genai.LLMPipeline(model_path, device) print(pipe.generate("What is OpenVINO?", max_length=200)) ``` More GenAI usage examples can be found in the OpenVINO GenAI library [docs](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-genai.html) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples). You can find more detailed usage examples in OpenVINO Notebooks: - [LLM](https://openvinotoolkit.github.io/openvino_notebooks/?search=LLM) - [RAG text generation](https://openvinotoolkit.github.io/openvino_notebooks/?search=RAG+system&tasks=Text+Generation) - [Convert models from ModelScope to OpenVINO](https://openvinotoolkit.github.io/openvino_notebooks/?search=Convert+models+from+ModelScope+to+OpenVINO) ## Limitations Check the original [model card](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) for limitations. ## Legal information The original model is distributed under the [Apache License Version 2.0](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct/blob/main/LICENSE) license. More details can be found in [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct). ## Disclaimer Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
OpenVINO/Qwen2.5-14B-Instruct-int4-ov
OpenVINO
2025-04-28T05:34:55Z
4
0
null
[ "openvino", "qwen2", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "base_model:Qwen/Qwen2.5-14B-Instruct", "base_model:quantized:Qwen/Qwen2.5-14B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-04-11T17:22:43Z
--- license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-14B-Instruct/blob/main/LICENSE base_model: - Qwen/Qwen2.5-14B-Instruct base_model_relation: quantized language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara --- # Qwen2.5-14B-Instruct-int4-ov * Model creator: [Qwen](https://huggingface.co/Qwen) * Original model: [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) ## Description This is [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2025/documentation/openvino-ir-format.html) (Intermediate Representation) format with weights compressed to INT4 by [NNCF](https://github.com/openvinotoolkit/nncf). ## Quantization Parameters Weight compression was performed using `nncf.compress_weights` with the following parameters: * mode: **INT4_ASYM** * ratio: **1** * group_size: **128** For more information on quantization, check the [OpenVINO model optimization guide](https://docs.openvino.ai/2025/openvino-workflow/model-optimization-guide/weight-compression.html). ## Compatibility The provided OpenVINO™ IR model is compatible with: * OpenVINO version 2025.1.0 and higher * Optimum Intel 1.24.0 and higher ## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) 1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend: ``` pip install optimum[openvino] ``` 2. Run model inference: ``` from transformers import AutoTokenizer from optimum.intel.openvino import OVModelForCausalLM model_id = "OpenVINO/qwen2.5-14b-instruct-int4-ov" tokenizer = AutoTokenizer.from_pretrained(model_id) model = OVModelForCausalLM.from_pretrained(model_id) inputs = tokenizer("What is OpenVINO?", return_tensors="pt") outputs = model.generate(**inputs, max_length=200) text = tokenizer.batch_decode(outputs)[0] print(text) ``` For more examples and possible optimizations, refer to the [Inference with Optimum Intel](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-optimum-intel.html). ## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai) 1. Install packages required for using OpenVINO GenAI. ``` pip install openvino-genai huggingface_hub ``` 2. Download model from HuggingFace Hub ``` import huggingface_hub as hf_hub model_id = "OpenVINO/qwen2.5-14b-instruct-int4-ov" model_path = "qwen2.5-14b-instruct-int4-ov" hf_hub.snapshot_download(model_id, local_dir=model_path) ``` 3. 
Run model inference: ``` import openvino_genai as ov_genai device = "CPU" pipe = ov_genai.LLMPipeline(model_path, device) print(pipe.generate("What is OpenVINO?", max_length=200)) ``` More GenAI usage examples can be found in the OpenVINO GenAI library [docs](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-genai.html) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples). You can find more detailed usage examples in OpenVINO Notebooks: - [LLM](https://openvinotoolkit.github.io/openvino_notebooks/?search=LLM) - [RAG text generation](https://openvinotoolkit.github.io/openvino_notebooks/?search=RAG+system&tasks=Text+Generation) - [Convert models from ModelScope to OpenVINO](https://openvinotoolkit.github.io/openvino_notebooks/?search=Convert+models+from+ModelScope+to+OpenVINO) ## Limitations Check the original [model card](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) for limitations. ## Legal information The original model is distributed under the [Apache License Version 2.0](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct/blob/main/LICENSE) license. More details can be found in [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct). ## Disclaimer Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
Alcoft/Qwen2.5-7B-Instruct-GGUF
Alcoft
2025-04-28T05:34:48Z
22
0
null
[ "gguf", "text-generation", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:quantized:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-12-01T01:08:44Z
--- license: apache-2.0 language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara base_model: - Qwen/Qwen2.5-7B-Instruct pipeline_tag: text-generation ---
Triangle104/Qwen2.5-3B-Q4_K_S-GGUF
Triangle104
2025-04-28T05:34:45Z
2
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "base_model:Qwen/Qwen2.5-3B", "base_model:quantized:Qwen/Qwen2.5-3B", "license:other", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-09-19T17:07:06Z
--- base_model: Qwen/Qwen2.5-3B language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara license: other license_name: qwen-research license_link: https://huggingface.co/Qwen/Qwen2.5-3B/blob/main/LICENSE pipeline_tag: text-generation tags: - llama-cpp - gguf-my-repo --- # Triangle104/Qwen2.5-3B-Q4_K_S-GGUF This model was converted to GGUF format from [`Qwen/Qwen2.5-3B`](https://huggingface.co/Qwen/Qwen2.5-3B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-3B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Qwen2.5-3B-Q4_K_S-GGUF --hf-file qwen2.5-3b-q4_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Qwen2.5-3B-Q4_K_S-GGUF --hf-file qwen2.5-3b-q4_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Qwen2.5-3B-Q4_K_S-GGUF --hf-file qwen2.5-3b-q4_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Qwen2.5-3B-Q4_K_S-GGUF --hf-file qwen2.5-3b-q4_k_s.gguf -c 2048 ```
Triangle104/Qwen2.5-3B-Q6_K-GGUF
Triangle104
2025-04-28T05:34:11Z
2
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "base_model:Qwen/Qwen2.5-3B", "base_model:quantized:Qwen/Qwen2.5-3B", "license:other", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-09-22T16:58:21Z
--- base_model: Qwen/Qwen2.5-3B language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara license: other license_name: qwen-research license_link: https://huggingface.co/Qwen/Qwen2.5-3B/blob/main/LICENSE pipeline_tag: text-generation tags: - llama-cpp - gguf-my-repo --- # Triangle104/Qwen2.5-3B-Q6_K-GGUF This model was converted to GGUF format from [`Qwen/Qwen2.5-3B`](https://huggingface.co/Qwen/Qwen2.5-3B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-3B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Qwen2.5-3B-Q6_K-GGUF --hf-file qwen2.5-3b-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Qwen2.5-3B-Q6_K-GGUF --hf-file qwen2.5-3b-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Qwen2.5-3B-Q6_K-GGUF --hf-file qwen2.5-3b-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Qwen2.5-3B-Q6_K-GGUF --hf-file qwen2.5-3b-q6_k.gguf -c 2048 ```
Triangle104/Qwen2.5-3B-Q8_0-GGUF
Triangle104
2025-04-28T05:34:02Z
3
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "base_model:Qwen/Qwen2.5-3B", "base_model:quantized:Qwen/Qwen2.5-3B", "license:other", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-09-22T17:00:11Z
--- base_model: Qwen/Qwen2.5-3B language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara license: other license_name: qwen-research license_link: https://huggingface.co/Qwen/Qwen2.5-3B/blob/main/LICENSE pipeline_tag: text-generation tags: - llama-cpp - gguf-my-repo --- # Triangle104/Qwen2.5-3B-Q8_0-GGUF This model was converted to GGUF format from [`Qwen/Qwen2.5-3B`](https://huggingface.co/Qwen/Qwen2.5-3B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-3B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Qwen2.5-3B-Q8_0-GGUF --hf-file qwen2.5-3b-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Qwen2.5-3B-Q8_0-GGUF --hf-file qwen2.5-3b-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Qwen2.5-3B-Q8_0-GGUF --hf-file qwen2.5-3b-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Qwen2.5-3B-Q8_0-GGUF --hf-file qwen2.5-3b-q8_0.gguf -c 2048 ```
Triangle104/Qwen2.5-7B-Q5_K_S-GGUF
Triangle104
2025-04-28T05:32:53Z
3
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "base_model:Qwen/Qwen2.5-7B", "base_model:quantized:Qwen/Qwen2.5-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-09-19T16:05:08Z
--- base_model: Qwen/Qwen2.5-7B language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-7B/blob/main/LICENSE pipeline_tag: text-generation tags: - llama-cpp - gguf-my-repo --- # Triangle104/Qwen2.5-7B-Q5_K_S-GGUF This model was converted to GGUF format from [`Qwen/Qwen2.5-7B`](https://huggingface.co/Qwen/Qwen2.5-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-7B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Qwen2.5-7B-Q5_K_S-GGUF --hf-file qwen2.5-7b-q5_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Qwen2.5-7B-Q5_K_S-GGUF --hf-file qwen2.5-7b-q5_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Qwen2.5-7B-Q5_K_S-GGUF --hf-file qwen2.5-7b-q5_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Qwen2.5-7B-Q5_K_S-GGUF --hf-file qwen2.5-7b-q5_k_s.gguf -c 2048 ```
Triangle104/Qwen2.5-14B-Q6_K-GGUF
Triangle104
2025-04-28T05:31:50Z
10
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "base_model:Qwen/Qwen2.5-14B", "base_model:quantized:Qwen/Qwen2.5-14B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-09-19T14:58:15Z
--- base_model: Qwen/Qwen2.5-14B language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-14B/blob/main/LICENSE pipeline_tag: text-generation tags: - llama-cpp - gguf-my-repo --- # Triangle104/Qwen2.5-14B-Q6_K-GGUF This model was converted to GGUF format from [`Qwen/Qwen2.5-14B`](https://huggingface.co/Qwen/Qwen2.5-14B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-14B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Qwen2.5-14B-Q6_K-GGUF --hf-file qwen2.5-14b-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Qwen2.5-14B-Q6_K-GGUF --hf-file qwen2.5-14b-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Qwen2.5-14B-Q6_K-GGUF --hf-file qwen2.5-14b-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Qwen2.5-14B-Q6_K-GGUF --hf-file qwen2.5-14b-q6_k.gguf -c 2048 ```
Triangle104/Qwen2.5-32B-Instruct-Q3_K_S-GGUF
Triangle104
2025-04-28T05:31:20Z
4
0
transformers
[ "transformers", "gguf", "chat", "llama-cpp", "gguf-my-repo", "text-generation", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "arxiv:2407.10671", "base_model:Qwen/Qwen2.5-32B-Instruct", "base_model:quantized:Qwen/Qwen2.5-32B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-12-29T14:13:41Z
--- license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-32B-Instruct/blob/main/LICENSE language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara pipeline_tag: text-generation base_model: Qwen/Qwen2.5-32B-Instruct tags: - chat - llama-cpp - gguf-my-repo library_name: transformers --- # Triangle104/Qwen2.5-32B-Instruct-Q3_K_S-GGUF This model was converted to GGUF format from [`Qwen/Qwen2.5-32B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) for more details on the model. --- Model Details: - Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2: Significantly more knowledge and greatly improved capabilities in coding and mathematics, thanks to our specialized expert models in these domains. Significant improvements in instruction following, generating long texts (over 8K tokens), understanding structured data (e.g., tables), and generating structured outputs, especially JSON. More resilient to the diversity of system prompts, enhancing role-play implementation and condition-setting for chatbots. Long-context support of up to 128K tokens, with generation of up to 8K tokens. Multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more. This repo contains the instruction-tuned 32B Qwen2.5 model, which has the following features: Type: Causal Language Models Training Stage: Pretraining & Post-training Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias Number of Parameters: 32.5B Number of Parameters (Non-Embedding): 31.0B Number of Layers: 64 Number of Attention Heads (GQA): 40 for Q and 8 for KV Context Length: Full 131,072 tokens and generation 8192 tokens Please refer to this section for detailed instructions on how to deploy Qwen2.5 for handling long texts. For more details, please refer to our blog, GitHub, and Documentation. Requirements The code of Qwen2.5 is included in the latest Hugging Face transformers, and we advise you to use the latest version of transformers. With transformers<4.37.0, you will encounter the following error: KeyError: 'qwen2' Quickstart The following code snippet uses apply_chat_template to show how to load the tokenizer and model and how to generate content. from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Qwen/Qwen2.5-32B-Instruct" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = "Give me a short introduction to large language models." messages = [ {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. 
You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] Processing Long Texts The current config.json is set for context length up to 32,768 tokens. To handle extensive inputs exceeding 32,768 tokens, we utilize YaRN, a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts. For supported frameworks, you could add the following to config.json to enable YaRN: { ..., "rope_scaling": { "factor": 4.0, "original_max_position_embeddings": 32768, "type": "yarn" } } For deployment, we recommend using vLLM. Please refer to our Documentation for usage if you are not familiar with vLLM. Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, potentially impacting performance on shorter texts. We advise adding the rope_scaling configuration only when processing long contexts is required. Evaluation & Performance Detailed evaluation results are reported in this 📑 blog. For requirements on GPU memory and the respective throughput, see results here. Citation If you find our work helpful, feel free to cite us. @misc{qwen2.5, title = {Qwen2.5: A Party of Foundation Models}, url = {https://qwenlm.github.io/blog/qwen2.5/}, author = {Qwen Team}, month = {September}, year = {2024} } @article{qwen2, title={Qwen2 Technical Report}, author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan}, journal={arXiv preprint arXiv:2407.10671}, year={2024} } --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Qwen2.5-32B-Instruct-Q3_K_S-GGUF --hf-file qwen2.5-32b-instruct-q3_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Qwen2.5-32B-Instruct-Q3_K_S-GGUF --hf-file qwen2.5-32b-instruct-q3_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. 
``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Qwen2.5-32B-Instruct-Q3_K_S-GGUF --hf-file qwen2.5-32b-instruct-q3_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Qwen2.5-32B-Instruct-Q3_K_S-GGUF --hf-file qwen2.5-32b-instruct-q3_k_s.gguf -c 2048 ```
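As a programmatic alternative to the CLI and server commands above, the quantized file can also be loaded from Python through the llama-cpp-python bindings. The sketch below is not part of the original card: the repo and file names are taken from the commands above, while the context size, GPU-offload setting, and sampling parameters are illustrative assumptions to tune for your hardware.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python huggingface-hub).
# Repo and file names come from the llama-cli/llama-server commands above; n_ctx,
# n_gpu_layers, and max_tokens are illustrative assumptions, not values from the card.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Triangle104/Qwen2.5-32B-Instruct-Q3_K_S-GGUF",
    filename="qwen2.5-32b-instruct-q3_k_s.gguf",
    n_ctx=2048,        # same context size as the server example above
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

# The converted GGUF should carry the Qwen chat template in its metadata,
# so chat-style completion can be used directly.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
        {"role": "user", "content": "Give me a short introduction to large language models."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```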
Triangle104/Qwen2.5-32B-Instruct-Q4_K_S-GGUF
Triangle104
2025-04-28T05:30:57Z
3
1
transformers
[ "transformers", "gguf", "chat", "llama-cpp", "gguf-my-repo", "text-generation", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "arxiv:2407.10671", "base_model:Qwen/Qwen2.5-32B-Instruct", "base_model:quantized:Qwen/Qwen2.5-32B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-12-29T14:48:51Z
--- license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-32B-Instruct/blob/main/LICENSE language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara pipeline_tag: text-generation base_model: Qwen/Qwen2.5-32B-Instruct tags: - chat - llama-cpp - gguf-my-repo library_name: transformers --- # Triangle104/Qwen2.5-32B-Instruct-Q4_K_S-GGUF This model was converted to GGUF format from [`Qwen/Qwen2.5-32B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) for more details on the model. --- Model Details: - Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2: Significantly more knowledge and greatly improved capabilities in coding and mathematics, thanks to our specialized expert models in these domains. Significant improvements in instruction following, generating long texts (over 8K tokens), understanding structured data (e.g., tables), and generating structured outputs, especially JSON. More resilient to the diversity of system prompts, enhancing role-play implementation and condition-setting for chatbots. Long-context support of up to 128K tokens, with generation of up to 8K tokens. Multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more. This repo contains the instruction-tuned 32B Qwen2.5 model, which has the following features: Type: Causal Language Models Training Stage: Pretraining & Post-training Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias Number of Parameters: 32.5B Number of Parameters (Non-Embedding): 31.0B Number of Layers: 64 Number of Attention Heads (GQA): 40 for Q and 8 for KV Context Length: Full 131,072 tokens and generation 8192 tokens Please refer to this section for detailed instructions on how to deploy Qwen2.5 for handling long texts. For more details, please refer to our blog, GitHub, and Documentation. Requirements The code of Qwen2.5 is included in the latest Hugging Face transformers, and we advise you to use the latest version of transformers. With transformers<4.37.0, you will encounter the following error: KeyError: 'qwen2' Quickstart The following code snippet uses apply_chat_template to show how to load the tokenizer and model and how to generate content. from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Qwen/Qwen2.5-32B-Instruct" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = "Give me a short introduction to large language models." messages = [ {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. 
You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] Processing Long Texts The current config.json is set for context length up to 32,768 tokens. To handle extensive inputs exceeding 32,768 tokens, we utilize YaRN, a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts. For supported frameworks, you could add the following to config.json to enable YaRN: { ..., "rope_scaling": { "factor": 4.0, "original_max_position_embeddings": 32768, "type": "yarn" } } For deployment, we recommend using vLLM. Please refer to our Documentation for usage if you are not familiar with vLLM. Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, potentially impacting performance on shorter texts. We advise adding the rope_scaling configuration only when processing long contexts is required. Evaluation & Performance Detailed evaluation results are reported in this 📑 blog. For requirements on GPU memory and the respective throughput, see results here. Citation If you find our work helpful, feel free to cite us. @misc{qwen2.5, title = {Qwen2.5: A Party of Foundation Models}, url = {https://qwenlm.github.io/blog/qwen2.5/}, author = {Qwen Team}, month = {September}, year = {2024} } @article{qwen2, title={Qwen2 Technical Report}, author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan}, journal={arXiv preprint arXiv:2407.10671}, year={2024} } --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Qwen2.5-32B-Instruct-Q4_K_S-GGUF --hf-file qwen2.5-32b-instruct-q4_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Qwen2.5-32B-Instruct-Q4_K_S-GGUF --hf-file qwen2.5-32b-instruct-q4_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. 
``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Qwen2.5-32B-Instruct-Q4_K_S-GGUF --hf-file qwen2.5-32b-instruct-q4_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Qwen2.5-32B-Instruct-Q4_K_S-GGUF --hf-file qwen2.5-32b-instruct-q4_k_s.gguf -c 2048 ```
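To make the "Processing Long Texts" step above concrete, the following hedged sketch shows one way the rope_scaling block quoted in the card could be added to a local copy of config.json with plain Python. The file path is a placeholder, not something specified by the original card, and the values simply mirror the JSON shown above.

```python
# Illustrative sketch only: add the YaRN rope_scaling block from the card to a local config.json.
# The path below is a placeholder for wherever the checkpoint was downloaded.
import json
from pathlib import Path

config_path = Path("Qwen2.5-32B-Instruct/config.json")  # hypothetical local checkpoint directory
config = json.loads(config_path.read_text())

# Same values as the JSON snippet in the "Processing Long Texts" section above.
config["rope_scaling"] = {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn",
}

config_path.write_text(json.dumps(config, indent=2))
```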
Triangle104/Qwen2.5-32B-Instruct-Q4_K_M-GGUF
Triangle104
2025-04-28T05:30:49Z
2
0
transformers
[ "transformers", "gguf", "chat", "llama-cpp", "gguf-my-repo", "text-generation", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "arxiv:2407.10671", "base_model:Qwen/Qwen2.5-32B-Instruct", "base_model:quantized:Qwen/Qwen2.5-32B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-12-29T15:02:24Z
--- license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-32B-Instruct/blob/main/LICENSE language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara pipeline_tag: text-generation base_model: Qwen/Qwen2.5-32B-Instruct tags: - chat - llama-cpp - gguf-my-repo library_name: transformers --- # Triangle104/Qwen2.5-32B-Instruct-Q4_K_M-GGUF This model was converted to GGUF format from [`Qwen/Qwen2.5-32B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) for more details on the model. --- Model Details: - Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2: Significantly more knowledge and greatly improved capabilities in coding and mathematics, thanks to our specialized expert models in these domains. Significant improvements in instruction following, generating long texts (over 8K tokens), understanding structured data (e.g., tables), and generating structured outputs, especially JSON. More resilient to the diversity of system prompts, enhancing role-play implementation and condition-setting for chatbots. Long-context support of up to 128K tokens, with generation of up to 8K tokens. Multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more. This repo contains the instruction-tuned 32B Qwen2.5 model, which has the following features: Type: Causal Language Models Training Stage: Pretraining & Post-training Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias Number of Parameters: 32.5B Number of Parameters (Non-Embedding): 31.0B Number of Layers: 64 Number of Attention Heads (GQA): 40 for Q and 8 for KV Context Length: Full 131,072 tokens and generation 8192 tokens Please refer to this section for detailed instructions on how to deploy Qwen2.5 for handling long texts. For more details, please refer to our blog, GitHub, and Documentation. Requirements The code of Qwen2.5 is included in the latest Hugging Face transformers, and we advise you to use the latest version of transformers. With transformers<4.37.0, you will encounter the following error: KeyError: 'qwen2' Quickstart The following code snippet uses apply_chat_template to show how to load the tokenizer and model and how to generate content. from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Qwen/Qwen2.5-32B-Instruct" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = "Give me a short introduction to large language models." messages = [ {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. 
You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] Processing Long Texts The current config.json is set for context length up to 32,768 tokens. To handle extensive inputs exceeding 32,768 tokens, we utilize YaRN, a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts. For supported frameworks, you could add the following to config.json to enable YaRN: { ..., "rope_scaling": { "factor": 4.0, "original_max_position_embeddings": 32768, "type": "yarn" } } For deployment, we recommend using vLLM. Please refer to our Documentation for usage if you are not familiar with vLLM. Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, potentially impacting performance on shorter texts. We advise adding the rope_scaling configuration only when processing long contexts is required. Evaluation & Performance Detailed evaluation results are reported in this 📑 blog. For requirements on GPU memory and the respective throughput, see results here. Citation If you find our work helpful, feel free to cite us. @misc{qwen2.5, title = {Qwen2.5: A Party of Foundation Models}, url = {https://qwenlm.github.io/blog/qwen2.5/}, author = {Qwen Team}, month = {September}, year = {2024} } @article{qwen2, title={Qwen2 Technical Report}, author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan}, journal={arXiv preprint arXiv:2407.10671}, year={2024} } --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Qwen2.5-32B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-32b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Qwen2.5-32B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-32b-instruct-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. 
``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Qwen2.5-32B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-32b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Qwen2.5-32B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-32b-instruct-q4_k_m.gguf -c 2048 ```
Triangle104/Qwen2.5-32B-Instruct-Q5_K_M-GGUF
Triangle104
2025-04-28T05:29:19Z
2
0
transformers
[ "transformers", "gguf", "chat", "llama-cpp", "gguf-my-repo", "text-generation", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "arxiv:2407.10671", "base_model:Qwen/Qwen2.5-32B-Instruct", "base_model:quantized:Qwen/Qwen2.5-32B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-12-29T15:48:03Z
--- license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-32B-Instruct/blob/main/LICENSE language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara pipeline_tag: text-generation base_model: Qwen/Qwen2.5-32B-Instruct tags: - chat - llama-cpp - gguf-my-repo library_name: transformers --- # Triangle104/Qwen2.5-32B-Instruct-Q5_K_M-GGUF This model was converted to GGUF format from [`Qwen/Qwen2.5-32B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) for more details on the model. --- Model Details: - Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2: Significantly more knowledge and greatly improved capabilities in coding and mathematics, thanks to our specialized expert models in these domains. Significant improvements in instruction following, generating long texts (over 8K tokens), understanding structured data (e.g., tables), and generating structured outputs, especially JSON. More resilient to the diversity of system prompts, enhancing role-play implementation and condition-setting for chatbots. Long-context support of up to 128K tokens, with generation of up to 8K tokens. Multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more. This repo contains the instruction-tuned 32B Qwen2.5 model, which has the following features: Type: Causal Language Models Training Stage: Pretraining & Post-training Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias Number of Parameters: 32.5B Number of Parameters (Non-Embedding): 31.0B Number of Layers: 64 Number of Attention Heads (GQA): 40 for Q and 8 for KV Context Length: Full 131,072 tokens and generation 8192 tokens Please refer to this section for detailed instructions on how to deploy Qwen2.5 for handling long texts. For more details, please refer to our blog, GitHub, and Documentation. Requirements The code of Qwen2.5 is included in the latest Hugging Face transformers, and we advise you to use the latest version of transformers. With transformers<4.37.0, you will encounter the following error: KeyError: 'qwen2' Quickstart The following code snippet uses apply_chat_template to show how to load the tokenizer and model and how to generate content. from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Qwen/Qwen2.5-32B-Instruct" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = "Give me a short introduction to large language models." messages = [ {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. 
You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] Processing Long Texts The current config.json is set for context length up to 32,768 tokens. To handle extensive inputs exceeding 32,768 tokens, we utilize YaRN, a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts. For supported frameworks, you could add the following to config.json to enable YaRN: { ..., "rope_scaling": { "factor": 4.0, "original_max_position_embeddings": 32768, "type": "yarn" } } For deployment, we recommend using vLLM. Please refer to our Documentation for usage if you are not familiar with vLLM. Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, potentially impacting performance on shorter texts. We advise adding the rope_scaling configuration only when processing long contexts is required. Evaluation & Performance Detailed evaluation results are reported in this 📑 blog. For requirements on GPU memory and the respective throughput, see results here. Citation If you find our work helpful, feel free to cite us. @misc{qwen2.5, title = {Qwen2.5: A Party of Foundation Models}, url = {https://qwenlm.github.io/blog/qwen2.5/}, author = {Qwen Team}, month = {September}, year = {2024} } @article{qwen2, title={Qwen2 Technical Report}, author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan}, journal={arXiv preprint arXiv:2407.10671}, year={2024} } --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Qwen2.5-32B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-32b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Qwen2.5-32B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-32b-instruct-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. 
``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Qwen2.5-32B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-32b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Qwen2.5-32B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-32b-instruct-q5_k_m.gguf -c 2048 ```
Triangle104/Qwen2.5-14B-Instruct-Q4_K_S-GGUF
Triangle104
2025-04-28T05:28:52Z
6
0
null
[ "gguf", "chat", "llama-cpp", "gguf-my-repo", "text-generation", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "base_model:Qwen/Qwen2.5-14B-Instruct", "base_model:quantized:Qwen/Qwen2.5-14B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-09-19T11:30:08Z
--- base_model: Qwen/Qwen2.5-14B-Instruct language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-14B-Instruct/blob/main/LICENSE pipeline_tag: text-generation tags: - chat - llama-cpp - gguf-my-repo --- # Triangle104/Qwen2.5-14B-Instruct-Q4_K_S-GGUF This model was converted to GGUF format from [`Qwen/Qwen2.5-14B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Qwen2.5-14B-Instruct-Q4_K_S-GGUF --hf-file qwen2.5-14b-instruct-q4_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Qwen2.5-14B-Instruct-Q4_K_S-GGUF --hf-file qwen2.5-14b-instruct-q4_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Qwen2.5-14B-Instruct-Q4_K_S-GGUF --hf-file qwen2.5-14b-instruct-q4_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Qwen2.5-14B-Instruct-Q4_K_S-GGUF --hf-file qwen2.5-14b-instruct-q4_k_s.gguf -c 2048 ```
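Once llama-server is running as shown in the Server command above, it exposes an OpenAI-compatible HTTP endpoint that can be queried from any language. The Python sketch below is an illustration rather than part of the original card: the host and port assume llama-server defaults, and the sampling values are placeholders.

```python
# Hedged sketch: call a locally running llama-server via its OpenAI-compatible REST API.
# Assumes the server was started with the command above and listens on the default
# host/port (http://localhost:8080); adjust if you passed --host/--port.
import requests

payload = {
    "messages": [
        {"role": "user", "content": "The meaning to life and the universe is"},
    ],
    "max_tokens": 128,    # illustrative sampling settings, not from the card
    "temperature": 0.7,
}

resp = requests.post("http://localhost:8080/v1/chat/completions", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```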
Triangle104/Qwen2.5-14B-Instruct-Q5_K_M-GGUF
Triangle104
2025-04-28T05:28:23Z
13
0
null
[ "gguf", "chat", "llama-cpp", "gguf-my-repo", "text-generation", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "base_model:Qwen/Qwen2.5-14B-Instruct", "base_model:quantized:Qwen/Qwen2.5-14B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-09-19T12:18:17Z
--- base_model: Qwen/Qwen2.5-14B-Instruct language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-14B-Instruct/blob/main/LICENSE pipeline_tag: text-generation tags: - chat - llama-cpp - gguf-my-repo --- # Triangle104/Qwen2.5-14B-Instruct-Q5_K_M-GGUF This model was converted to GGUF format from [`Qwen/Qwen2.5-14B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Qwen2.5-14B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-14b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Qwen2.5-14B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-14b-instruct-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Qwen2.5-14B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-14b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Qwen2.5-14B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-14b-instruct-q5_k_m.gguf -c 2048 ```
TOMFORD79/S8
TOMFORD79
2025-04-28T05:27:27Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-04-28T04:02:49Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
mlfoundations-dev/d1_science_gpt_1k
mlfoundations-dev
2025-04-28T05:26:11Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-28T05:23:28Z
--- library_name: transformers license: other base_model: Qwen/Qwen2.5-7B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: d1_science_gpt_1k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # d1_science_gpt_1k This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_science_gpt_1k dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 16 - gradient_accumulation_steps: 6 - total_train_batch_size: 96 - total_eval_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 7.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.6.0a0+ecf3bae40a.nv25.01 - Datasets 3.5.0 - Tokenizers 0.20.3
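Since the card above lists only training details, a short, hedged inference sketch may help. It assumes the fine-tuned checkpoint is published under the repo id named in the card and that it inherits the standard Qwen2.5-7B-Instruct chat template; the prompt and generation settings are purely illustrative.

```python
# Hedged sketch: load the fine-tuned checkpoint with transformers and run one chat turn.
# Assumes the repo id below is the published checkpoint and that it uses the
# Qwen2.5-7B-Instruct chat template; max_new_tokens is an arbitrary illustrative value.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlfoundations-dev/d1_science_gpt_1k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Explain the difference between mitosis and meiosis."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output_ids[0][inputs.shape[-1]:], skip_special_tokens=True))
```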
Triangle104/Qwen2.5-7B-Instruct-Q4_K_M-GGUF
Triangle104
2025-04-28T05:26:03Z
2
0
null
[ "gguf", "chat", "llama-cpp", "gguf-my-repo", "text-generation", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:quantized:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-09-19T15:12:27Z
--- base_model: Qwen/Qwen2.5-7B-Instruct language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct/blob/main/LICENSE pipeline_tag: text-generation tags: - chat - llama-cpp - gguf-my-repo --- # Triangle104/Qwen2.5-7B-Instruct-Q4_K_M-GGUF This model was converted to GGUF format from [`Qwen/Qwen2.5-7B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Qwen2.5-7B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-7b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Qwen2.5-7B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-7b-instruct-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Qwen2.5-7B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-7b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Qwen2.5-7B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-7b-instruct-q4_k_m.gguf -c 2048 ```
belyakoff/puzzle-search-model
belyakoff
2025-04-28T05:20:43Z
0
0
sentence-transformers
[ "sentence-transformers", "pytorch", "safetensors", "xlm-roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:1413", "loss:GISTEmbedLoss", "arxiv:1908.10084", "arxiv:2402.16829", "base_model:intfloat/multilingual-e5-large-instruct", "base_model:finetune:intfloat/multilingual-e5-large-instruct", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-04-28T03:51:03Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:1413 - loss:GISTEmbedLoss base_model: intfloat/multilingual-e5-large-instruct widget: - source_sentence: '1. проверить , запущен ли процесс fox. Если запущен, закрыть 2. открыть страницу в браузере. Адрес: avito.ru. Если будут напоминания для пользователя — согласится на все. 3. навести мышь на меню Каталоги, дождаться появления подменю и навести мышь на меню «каталог автомобилей». Кликнуть мышкой 4. найти все слова, которые выделены тэгом <h3>. Из списка сделать словарь, ключ - текст, значение — другие параметры в тэге. 5. преобразовать словарь в датафрейм 6. сгруппировать данные датафрейма. Если есть повторы в ключах, сделать один ключ, но с объединенным значением 7. выгрузить датафрем обратно в словарь. 8. Отправить сообщение в whatsapp об удачном завершении процесса' sentences: - 'Уведомление пользователя. describe: Выводит на экран всплывающее окно с указанным текстом. Приостанавливает работу алгаритма до нажатия ''ok''. Окно закрывается по нажатию кнопки ''ok''..Блок выводит на экран всплывающее окно с указанным текстом. Окно закрывается по нажатию кнопки ‘Закрыть’. Выполнение алгоритма приостанавливается во время отображения уведомления. Чтобы скопировать содержимое уведомления, необходимо нажать кнопку “Скопировать”..Показать сообщений пользователю. Показывает в отдельном окне.' - "Добавить строку в DataFrame. describe: Добавляет строку в dataframe по номеру..Блок\ \ позволяет добавить, перезаписать и удалить строку в DataFrame. При этом необходимо\ \ указать сам DataFrame, строку для добавления/перезаписи и номер позиции..поскольку\ \ DataFrame (датафрейм) это таблица, к ней можно добавить строчку, удалить строчку\ \ или изменить строчку\nПри добавлении строки в датафрейм, нужно указать в какое\ \ место ее нужно поместить. \nПри удалении строки тоже нужно указать номер удаляемой\ \ строки.\nИ при изменении строки тоже указывается номер строки\nПри добавлении\ \ или изменении строки указывается список значений всех колонок\nПример\nДобавить\ \ строку в датафрейм dataframe\nУдалить строку из таблицы\nИзменить строку в датафрейме" - 'Запущен ли процесс. describe: Проверяет, запущен ли указанный процесс. Если хотя бы один экземпляр процесса запущен, то вернет True, иначе - False..Блок проверяет, запущен ли указанный процесс. Если хотя бы один экземпляр процесса запущен, то вернет “истина”, иначе вернет “ложь”.."Запущен ли процесс" — это вопрос, касающийся текущего состояния процесса в операционной системе, обозначающий его активность или присутствие в системе. Процесс считается запущенным, если он был инициирован и выполняется в данный момент времени. Для проверки, запущен ли процесс, операционная система использует таблицы процессов, где каждый процесс имеет свой уникальный идентификатор (PID) и информацию о его статусе. Когда процесс находится в активном состоянии, он использует ресурсы системы, такие как CPU и память, для выполнения своих задач. Проверку статуса процесса можно осуществить с помощью различных инструментов, например, с помощью командной строки или графических интерфейсов, отображающих активные процессы.' - source_sentence: "1. открыть Excel\n2. перейти на страницу «Итого»\n3. переместиться\ \ на кнопку «Стрелка вправо»\n4. прокрутить колесико мышки ровно 4 секунды, со\ \ скоростью 30 пикселей в секунду\n5. дальше выбор. Если в центре экрана видна\ \ печать NASA (пример есть в файле nasa.logo), то в переменную param записать\ \ Истина\n6. 
если в ровно в центре экрана нет печати NASA, то в переменную param\ \ записать None (не путать с ложь)\n7. если в переменной param записано None,\ \ найти в папке dir файл Roscosmos.data.\n8. открыть , прочитать все строки. Преобразовать\ \ строки в таблицу. Поставить фильтр на первой колонке , равенство, Роскосомос.\ \ \n9. полученную таблицу распечатать на принтере Printer1" sentences: - 'Удалить дубликаты. describe: Удаляет повторяющиеся элементы списка. Возвращает список уникальных элементов..Блок удаляет повторяющиеся элементы списка, возвращает список уникальных элементов..Аналог функции list(set()) в python. Удаляет дубли. Сначала делаем множество, где одинаковые элементы объединяются, а потом из множества делаем список Примеры: Удалить дубли Получить уникальные значение' - "Фильтровать табличные данные. describe: Фильтрует табличные данные по указанному\ \ столбцу и операции..Блок позволяет Фильтровать табличные данные. Необходимо\ \ указать:Таблицу - Путь к файлу или DataFrame источника данных;Столбец или список\ \ столбцов - Столбец или список столбцов для фильтрации;Операция - Операция сравнения\ \ для фильтрации;Значение - Значение или список значений для фильтрации;Движок\ \ обработки данных - Определяет способ использования ресурсов процессора при обработке\ \ таблиц.Для визуальной работы с данными, кликните по кнопке:Визуализация данныхФорма\ \ визуализации данных - встроенный инструмент Puzzle RPA, который позволяет загружать\ \ и просматривать различные наборы данных..датафрейм, как таблица, может быть\ \ использован для быстрой фильтрации. Можем наложить отбор на любую колонку и\ \ посмотреть что останется\nЗначение в колонке , на которую накладывается отбор\ \ может проверяться на:\nРавенству какому либо значению\nНе равенству\nНа меньше\n\ Больше\nЕсли в колонке есть пустые значение\nНаоборот, выбрать те, где заполнено\ \ \nВ списке значений\nПримеры\nОтфильтровать данные по колонке\nОставить только\ \ те строки, где..\nНаложить отбор на датафрейм" - 'Изменить порядок. describe: Блок меняет порядок списка на обратный..Инвертирует строку. Последний символ становится первым. А первый последним. Например Инвертировать строку «полисад». Ответ «дасилоп».' - source_sentence: Скопировать число в файле Excel в столбце "Количество в граммах". Извлечь данные из буфера обмена и выполнить деление этого числа на 1000. Вызвать через командную строку калькулятор и на калькуляторе возвести в квадрат результат деления sentences: - 'Прочитать письма. describe: Считывает письма электронной почты с указанными параметрами..Блок позволяет прочитать письма по IMAP.Требуется указать:Данные почтового аккаунта, который будет прочитан;Адрес сервера;Папку для сохранения вложений из писем.Дополнительно нажатием на “+” можно добавлять следующие параметры:Дату, с которой получать сообщения;Дату, до которой получать сообщения;Отправителя;Получателя;Тему сообщения;Тело сообщения;Подстроку в теме или теле сообщения;ID-сообщения;Наличие вложения;Наличие флага;Получить только не прочитанные;Отметить сообщение прочитанным;Отметить сообщение флагом.Некоторые почтовые сервисы не поддерживают работу всех фильтров..процесс получения и отображения содержания электронного письма, отправленного через почтовую службу, в почтовом клиенте или веб-интерфейсе. Он включает в себя доступ к почтовому ящику, выбор конкретного письма и его открытие для просмотра. При этом письма могут содержать текст, вложенные файлы, изображения и ссылки, которые пользователь может просмотреть. 
Во время чтения письма происходит декодирование и отображение данных, полученных с почтового сервера. Этот процесс может быть выполнен как на компьютере, так и на мобильных устройствах через специализированные приложения или веб-сайты' - 'Остановить секундомер. describe: Останавливет секундомер и сохраняет результат в переменню.Блок останавливает секундомер и сохраняет результат в переменную. Единица измерения времени - секунды..действие, заключающееся в прекращении отсчёта времени, фиксируемого устройством, предназначенным для измерения интервалов времени. Обычно, секундомер активируется нажатием кнопки, и его остановка происходит также нажатием на соответствующую кнопку или команду. После остановки секундомер фиксирует текущий результат в виде времени, прошедшего с начала отсчёта. Остановка может быть выполнена вручную или автоматически, в зависимости от типа устройства. После остановки можно записать результат или повторить отсчёт времени, начиная новый цикл' - 'Арифметические операции. describe: В блоке есть два паза для добавления чисел, кликнув по текущему условию, можно выбрать операцию, которую требуется произвести с числами. Блок имеет выпадающее меню. Клик по символу раскрывающегося списка открывает следующее меню:В меню представлены следующие опции:+ -возвращает сумму двух чисел;-- возвращает разность двух чисел;×- возвращает произведение двух чисел;÷ -возвращает частное от деления первого числа на второе;^- возвращает первое число, возведенное в степень второго..Нужен для Сложения (+) Вычитания (-) Умножения (*) Деления (/) Возведение в степень (^) двух чисел. Например: Сложить два числа Найти остаток 5*9 = 45 1-8=-7 2:2 = 1 Увеличить число на 8' - source_sentence: '1. подключиться к базе данных Postgres. Параметры подключение взять из глобальных переменных 2. таблица Date, выбрать все даты прошлого года (list1) 3. таблица Numbers, выбрать все числа, которые не делятся на 2 (list2) 4. все даты в list1 преобразовать в строки в формате YyYy:Dd:Hhhh 5. для всех чисел list2 найти остаток от деления на 5. 6. объединить оба списка в один list3. Сохранить список в текстовый файл file.txt 7. проверить, если логин пароль для доступа на сайт my_fork.fr 8. если нет, то добавить с логином ME паролем 123dfg 9. загрузить file.txt на сайт my_fork.fr' sentences: - 'Сделать скриншот. describe: Сохраняет в файл скриншот всего экрана..Блок сохраняет в файл скриншот всего экрана. Требуется указать путь к файлу с указанием названия и расширения файла (.png). Файл будет создан автоматически по указанному пути..процесс создания цифровой копии изображения или изображения и текста, отображаемых на экране компьютера или другого устройства, такого как смартфон или планшет. Эта операция позволяет сохранить текущее состояние дисплея в виде файла, который может быть использован для различных целей, включая демонстрацию ошибок программного обеспечения, сохранение важной информации или обмен изображениями через интернет. Скриншоты обычно сохраняются в форматах изображений, таких как PNG, JPEG или BMP. Для создания скриншота используются встроенные средства операционной системы, специализированное программное обеспечение или горячие клавиши на клавиатуре. Полученные скриншоты могут быть редактированы с помощью графических редакторов для выделения важных элементов или добавления комментариев перед тем, как их использовать' - 'Триггер по письму. 
describe: Ждет появления определенного сообщения в электронной почте..Блок ожидает появление определенного письма в электронной почте.Требуется указать:Данные почтового аккаунта, который будет прочитан;Адрес сервера;Время ожидания.Дополнительно нажатием на “+” можно добавлять следующие параметры:Отправителя;Получателя;Тему сообщения;Тело сообщения;Подстроку в теме или теле сообщения;Наличие вложения;Отметить сообщение прочитанным;Отметить сообщение флагом.Некоторые почтовые сервисы не поддерживают работу всех фильтров..автоматическое событие или условие, которое активируется при получении нового письма на электронную почту. Этот триггер может быть настроен для различных действий, например, отправки уведомлений, переноса письма в определённую папку или запуска скрипта. Он работает на основе заданных критериев, таких как отправитель, тема письма или ключевые слова в содержимом. Триггер может быть реализован в почтовых клиентах или с помощью серверных автоматизаций, например, через API почтовых сервисов. Основной целью является автоматизация обработки входящих сообщений без необходимости вручную отслеживать каждое письмо' - 'Остаток от деления. describe: Блок возвращает остаток от деления двух чисел..Математическая операция , которая получает остаток от деления двух чисел В python это операция %. Например Найти остаток от деления 15 на 3. Ответ 0 Найти остаток от деления 15 на 10. Ответ 5' - source_sentence: с помощью bash скрипта узнать все рабочие процессы. В цикле начать их обходить. Если процесс начинается на цифру, то остановить его. В файле delete_processes.txt дописать имя закрытого процесса sentences: - 'Переключиться на процесс. describe: Блок позволяет подключиться к запущенному процессу «1С», для дальнейшего взаимодействия с программой..Если толстый клиент 1с открыт, но был свернут, этот блок может вернуть в фокус 1с предприятие. ' - 'Дописать в файл. describe: Дописывает текст в конец указанного текстового или json-файла..Блок дописывает текст в конец указанного текстового или json-файла..В конец текстового файла с расширением txt или json дописать текст Примеры Добавить в текстовый файл Дописать текст в файл' - 'Прочитать из Word. describe: Считывает содержимое указанного документа Word. Возвращает считанные данные в виде строки..Блок считывает содержимое указанного файла Word, Поддерживаемый формат файла - docx. Возвращает строку, в строке содержатся данные форматирования..ворд это текстовый документ, с возможностью форматирования текста. Этот текст можно прочитать в переменную и потом обрабатывать текст. Укажите путь к word файлу и файл будет прочитан Примеры Прочитать ворд Получить текст из word файла' pipeline_tag: sentence-similarity library_name: sentence-transformers --- # SentenceTransformer based on intfloat/multilingual-e5-large-instruct This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) <!-- at revision 84344a23ee1820ac951bc365f1e91d094a911763 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 1024 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("belyakoff/puzzle-search-model") # Run inference sentences = [ 'с помощью bash скрипта узнать все рабочие процессы. В цикле начать их обходить. Если процесс начинается на цифру, то остановить его. В файле delete_processes.txt дописать имя закрытого процесса', 'Дописать в файл. describe: Дописывает текст в конец указанного текстового или json-файла..Блок дописывает текст в конец указанного текстового или json-файла..В конец текстового файла с расширением txt или json дописать текст\nПримеры\nДобавить в текстовый файл\nДописать текст в файл', 'Переключиться на процесс. describe: Блок позволяет подключиться к запущенному процессу «1С», для дальнейшего взаимодействия с программой..Если толстый клиент 1с открыт, но был свернут, этот блок может вернуть в фокус 1с предприятие. ', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 1,413 training samples * Columns: <code>anchor</code> and <code>positive</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | |:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 15 tokens</li><li>mean: 82.45 tokens</li><li>max: 326 tokens</li></ul> | <ul><li>min: 42 tokens</li><li>mean: 216.75 tokens</li><li>max: 512 tokens</li></ul> | * Samples: | anchor | positive | |:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>1. Авторизация в 1С-веб<br>2. Переключиться на страницу Файлы. Перейти в раздел документации.<br>3. Скачивание файла, сохранение на диск с проверкой SSL-сертификата, если это предусмотрено параметром `ssl_verify`.<br>4. Применение алгоритма сжатия к PDF-файлу, с конвертацией в оттенки серого и сохранением текстового слоя.<br>5. Сохранения сжатого файла в заданную директорию по пути `directory_path`.<br>6. уведомление об успешном скачивании, сжатии PDF-файла и сохранении с указанием размеров файлов до и после операции сжатия.</code> | <code>Добавить фильтр 1С-веб. describe: Позволяет выбрать один или несколько вариантов для открытия страницы/переключения на страницу..Блок “Добавить фильтр 1С-веб” позволяет выбрать один или несколько вариантов для открытия страницы/переключения на страницу:Ссылка на страницу;Название страницы равно;Название страницы содержит;Название страницы не содержит.В разъем следует поместить текстовый блок с искомым названием/ссылкой..Открыть документ, справочник, отчет или любую другую форму в 1с предприятии в браузере. Нужно указать или навигационную ссылку или название формы<br></code> | | <code>1. открыть 1с<br>2. авторизоваться в 1с<br>3. открыть пункт меню Инструкции 2025 с помощью блока поиска. В поле имя указать «содержит» «Инструкции + currentYear()»<br>4. Нажать кнопку открыть и скачать последний файл<br>5. Открыть файл<br>6. Перевернуть страницу, если ориентация не равна 0 градусов<br>7. если файл был изменен, сохранить его в 1с как новую версию.</code> | <code>Добавить фильтр 1С-веб. 
describe: Позволяет выбрать один или несколько вариантов для открытия страницы/переключения на страницу..Блок “Добавить фильтр 1С-веб” позволяет выбрать один или несколько вариантов для открытия страницы/переключения на страницу:Ссылка на страницу;Название страницы равно;Название страницы содержит;Название страницы не содержит.В разъем следует поместить текстовый блок с искомым названием/ссылкой..Открыть документ, справочник, отчет или любую другую форму в 1с предприятии в браузере. Нужно указать или навигационную ссылку или название формы<br></code> | | <code>1. открыть 1с. Авторизоваться<br>2. открыть раздел «Пользователи» установив фильтр по равенству страница = Пользователи<br>3. открыть список пользователей отдела Консолидированной отчетности<br>4. выгрузить справочник в виде таблицы — колонки: имя пользователя, СНИЛС<br>5. преобразовать снилс из строки в число, и получить сумму цифр<br>6. запустить процесс airflow, который будет раз в час искать в базе данных postgres, в таблице Emploers, все записи, с фильтром снилс, взятый из п5. Если записей не будет , вызвать исключение</code> | <code>Добавить фильтр 1С-веб. describe: Позволяет выбрать один или несколько вариантов для открытия страницы/переключения на страницу..Блок “Добавить фильтр 1С-веб” позволяет выбрать один или несколько вариантов для открытия страницы/переключения на страницу:Ссылка на страницу;Название страницы равно;Название страницы содержит;Название страницы не содержит.В разъем следует поместить текстовый блок с искомым названием/ссылкой..Открыть документ, справочник, отчет или любую другую форму в 1с предприятии в браузере. Нужно указать или навигационную ссылку или название формы<br></code> | * Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters: ```json {'guide': SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ), 'temperature': 0.03} ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 4 - `learning_rate`: 1e-05 - `num_train_epochs`: 50 - `dataloader_drop_last`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 4 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 1e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 50 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - 
`seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: True - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Framework Versions - Python: 3.10.16 - Sentence Transformers: 4.0.1 - Transformers: 4.49.0 - PyTorch: 2.6.0+cu124 - Accelerate: 1.4.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### GISTEmbedLoss ```bibtex @misc{solatorio2024gistembed, title={GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning}, author={Aivin V. 
Solatorio}, year={2024}, eprint={2402.16829}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
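As a rough illustration of the training setup described above (anchor/positive pairs, `GISTEmbedLoss` with a guide model at temperature 0.03, `no_duplicates` batch sampling), the sketch below shows how a comparable run could be set up with the Sentence Transformers trainer API. The guide checkpoint, placeholder data, and output path are assumptions, not the exact script used for this model.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import GISTEmbedLoss
from sentence_transformers.training_args import BatchSamplers

# Model to fine-tune and guide model for GISTEmbedLoss (assumed here to start from the same base)
model = SentenceTransformer("intfloat/multilingual-e5-large-instruct")
guide = SentenceTransformer("intfloat/multilingual-e5-large-instruct")

# Placeholder anchor/positive pairs; the real dataset has 1,413 such rows
train_dataset = Dataset.from_dict({
    "anchor": ["task description 1", "task description 2", "task description 3", "task description 4"],
    "positive": ["block description 1", "block description 2", "block description 3", "block description 4"],
})

# Guided in-sample negative selection with the temperature reported above
loss = GISTEmbedLoss(model, guide, temperature=0.03)

args = SentenceTransformerTrainingArguments(
    output_dir="puzzle-search-model",            # assumed output path
    num_train_epochs=50,
    per_device_train_batch_size=4,
    learning_rate=1e-5,
    dataloader_drop_last=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,   # matches the `no_duplicates` setting above
)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()
```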
ranranrunforit/ppo-SnowballTarget
ranranrunforit
2025-04-28T05:19:00Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2025-04-28T05:18:54Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity 2. Find your model_id: ranranrunforit/ppo-SnowballTarget 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
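If you prefer to fetch the exported policy file programmatically instead of browsing the repository, a minimal sketch with `huggingface_hub` is shown below; the `SnowballTarget.onnx` filename is an assumption based on the usual ML-Agents export name, so check the repository's file list first.

```python
from huggingface_hub import hf_hub_download

# Download the exported ONNX policy from this repository (filename assumed)
onnx_path = hf_hub_download(
    repo_id="ranranrunforit/ppo-SnowballTarget",
    filename="SnowballTarget.onnx",
)
print(onnx_path)
```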
mlfoundations-dev/d1_science_mc_llm_0.3k
mlfoundations-dev
2025-04-28T05:17:03Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-28T05:14:33Z
--- library_name: transformers license: other base_model: Qwen/Qwen2.5-7B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: d1_science_mc_llm_0.3k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # d1_science_mc_llm_0.3k This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_science_mc_llm_0.3k dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 16 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - total_eval_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 13.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.6.0a0+ecf3bae40a.nv25.01 - Datasets 3.5.0 - Tokenizers 0.20.3
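As a usage sketch (not part of the original card), the fine-tuned checkpoint can be loaded like any other Qwen2.5-style chat model through the `transformers` pipeline; the prompt and generation settings below are illustrative only.

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mlfoundations-dev/d1_science_mc_llm_0.3k",
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Which gas makes up most of Earth's atmosphere? Answer with one word."},
]
output = generator(messages, max_new_tokens=64, return_full_text=False)
print(output[0]["generated_text"])
```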
Samarth2511/DS-Llama-8B-DA-med-both-r32
Samarth2511
2025-04-28T05:13:22Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/DeepSeek-R1-Distill-Llama-8B", "base_model:finetune:unsloth/DeepSeek-R1-Distill-Llama-8B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-28T05:11:48Z
--- base_model: unsloth/DeepSeek-R1-Distill-Llama-8B tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Samarth2511 - **License:** apache-2.0 - **Finetuned from model:** unsloth/DeepSeek-R1-Distill-Llama-8B This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
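A loading sketch is given below (not from the original card). It assumes the repository holds full merged weights, as the `transformers` tag suggests, and uses the base model's chat template; the prompt and generation settings are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Samarth2511/DS-Llama-8B-DA-med-both-r32"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Briefly explain what a LoRA adapter is."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```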
infogep/2135af38-565f-4a07-8c3e-433125d21ee9
infogep
2025-04-28T05:06:55Z
0
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:teknium/OpenHermes-2.5-Mistral-7B", "base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-04-28T04:58:04Z
--- library_name: peft license: apache-2.0 base_model: teknium/OpenHermes-2.5-Mistral-7B tags: - axolotl - generated_from_trainer model-index: - name: 2135af38-565f-4a07-8c3e-433125d21ee9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: teknium/OpenHermes-2.5-Mistral-7B bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 0117447d3950c946_train_data.json ds_type: json format: custom path: /workspace/input_data/0117447d3950c946_train_data.json type: field_instruction: first_message field_output: first_answer format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: infogep/2135af38-565f-4a07-8c3e-433125d21ee9 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/0117447d3950c946_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: <|im_end|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: dace43b8-8ffb-4c18-baa0-ebd02df71793 wandb_project: s56-30 wandb_run: your_name wandb_runid: dace43b8-8ffb-4c18-baa0-ebd02df71793 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 2135af38-565f-4a07-8c3e-433125d21ee9 This model is a fine-tuned version of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.3684 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.0578 | 0.0756 | 200 | 1.3684 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
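This repository contains a LoRA adapter (PEFT) rather than full weights, so for inference the adapter has to be attached to the OpenHermes base model. A minimal loading sketch, not taken from the card; the prompt and generation settings are illustrative:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "teknium/OpenHermes-2.5-Mistral-7B"
adapter_id = "infogep/2135af38-565f-4a07-8c3e-433125d21ee9"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")

# Attach the LoRA adapter to the base model
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Explain gradient checkpointing in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```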
kokovova/370b6dd9-ba53-413c-a469-0adf5c7c8751
kokovova
2025-04-28T05:04:46Z
0
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:teknium/OpenHermes-2.5-Mistral-7B", "base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-04-28T04:58:30Z
--- library_name: peft license: apache-2.0 base_model: teknium/OpenHermes-2.5-Mistral-7B tags: - axolotl - generated_from_trainer model-index: - name: 370b6dd9-ba53-413c-a469-0adf5c7c8751 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: teknium/OpenHermes-2.5-Mistral-7B bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 0117447d3950c946_train_data.json ds_type: json format: custom path: /workspace/input_data/0117447d3950c946_train_data.json type: field_instruction: first_message field_output: first_answer format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: kokovova/370b6dd9-ba53-413c-a469-0adf5c7c8751 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/0117447d3950c946_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: <|im_end|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: dace43b8-8ffb-4c18-baa0-ebd02df71793 wandb_project: s56-4 wandb_run: your_name wandb_runid: dace43b8-8ffb-4c18-baa0-ebd02df71793 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 370b6dd9-ba53-413c-a469-0adf5c7c8751 This model is a fine-tuned version of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.3681 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.0608 | 0.0756 | 200 | 1.3681 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
sometimesanotion/Qwenvergence-14B-v3-Prose
sometimesanotion
2025-04-28T05:03:24Z
21
5
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "mergekit", "merge", "conversational", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "arxiv:2403.19522", "base_model:EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2", "base_model:merge:EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2", "base_model:Qwen/Qwen2.5-14B", "base_model:merge:Qwen/Qwen2.5-14B", "base_model:allura-org/TQ2.5-14B-Sugarquill-v1", "base_model:merge:allura-org/TQ2.5-14B-Sugarquill-v1", "base_model:arcee-ai/Virtuoso-Small", "base_model:merge:arcee-ai/Virtuoso-Small", "base_model:huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2", "base_model:merge:huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2", "base_model:oxyapi/oxy-1-small", "base_model:merge:oxyapi/oxy-1-small", "base_model:sthenno-com/miscii-14b-1028", "base_model:merge:sthenno-com/miscii-14b-1028", "base_model:underwoods/medius-erebus-magnum-14b", "base_model:merge:underwoods/medius-erebus-magnum-14b", "base_model:v000000/Qwen2.5-Lumen-14B", "base_model:merge:v000000/Qwen2.5-Lumen-14B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-12-21T01:57:03Z
--- base_model: - huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2 - EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2 - Qwen/Qwen2.5-14B - allura-org/TQ2.5-14B-Sugarquill-v1 - sthenno-com/miscii-14b-1028 - v000000/Qwen2.5-Lumen-14B - underwoods/medius-erebus-magnum-14b - oxyapi/oxy-1-small - arcee-ai/Virtuoso-Small library_name: transformers tags: - mergekit - merge license: apache-2.0 pipeline_tag: text-generation new_version: sometimesanotion/Qwenvergence-14B-v13-Prose-DS language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Qwen/Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) as a base. ### Models Merged The following models were included in the merge: * [huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2](https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2) * [EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2) * [allura-org/TQ2.5-14B-Sugarquill-v1](https://huggingface.co/allura-org/TQ2.5-14B-Sugarquill-v1) * [sthenno-com/miscii-14b-1028](https://huggingface.co/sthenno-com/miscii-14b-1028) * [v000000/Qwen2.5-Lumen-14B](https://huggingface.co/v000000/Qwen2.5-Lumen-14B) * [underwoods/medius-erebus-magnum-14b](https://huggingface.co/underwoods/medius-erebus-magnum-14b) * [oxyapi/oxy-1-small](https://huggingface.co/oxyapi/oxy-1-small) * [arcee-ai/Virtuoso-Small](https://huggingface.co/arcee-ai/Virtuoso-Small) ### Configuration The following YAML configuration was used to produce this model: ```yaml name: Qwenvergence-14B-v3-Prose merge_method: model_stock base_model: Qwen/Qwen2.5-14B tokenizer_source: base parameters: int8_mask: true normalize: true rescale: false models: - model: EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2 - model: oxyapi/oxy-1-small - model: allura-org/TQ2.5-14B-Sugarquill-v1 - model: arcee-ai/Virtuoso-Small - model: v000000/Qwen2.5-Lumen-14B - model: underwoods/medius-erebus-magnum-14b - model: sthenno-com/miscii-14b-1028 - model: sthenno-com/miscii-14b-1028 - model: huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2 dtype: bfloat16 out_dtype: bfloat16 ```
fapasw/llm_course_test
fapasw
2025-04-28T05:00:46Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-28T04:59:56Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kamatchi5/llava-qlora-merged
kamatchi5
2025-04-28T04:57:05Z
0
0
transformers
[ "transformers", "safetensors", "llava", "image-text-to-text", "arxiv:1910.09700", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
image-text-to-text
2025-04-28T04:25:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DevQuasar/Tesslate.Tessa-Rust-T1-7B-GGUF
DevQuasar
2025-04-28T04:49:21Z
0
0
null
[ "gguf", "text-generation", "base_model:Tesslate/Tessa-Rust-T1-7B", "base_model:quantized:Tesslate/Tessa-Rust-T1-7B", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-04-28T03:57:00Z
--- base_model: - Tesslate/Tessa-Rust-T1-7B pipeline_tag: text-generation --- [<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com) Quantized version of: [Tesslate/Tessa-Rust-T1-7B](https://huggingface.co/Tesslate/Tessa-Rust-T1-7B) 'Make knowledge free for everyone' <p align="center"> Made with <br> <a href="https://www.civo.com/" target="_blank"> <img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/> </a> </p> <a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
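One way to run these quantized files from Python is the `llama-cpp-python` bindings (an assumption; the card itself does not prescribe a runtime). The quant filename pattern below is also an assumption, so adjust it to a file that actually exists in the repository.

```python
from llama_cpp import Llama

# Downloads a matching GGUF file from the repo and loads it (filename pattern assumed)
llm = Llama.from_pretrained(
    repo_id="DevQuasar/Tesslate.Tessa-Rust-T1-7B-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Rust function that reverses a string."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```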
DeathReaper0965/Gemma-1b-SQL-Reasoning-GRPO-QLoRA
DeathReaper0965
2025-04-28T04:47:02Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "grpo", "text-generation", "conversational", "en", "arxiv:2402.03300", "base_model:google/gemma-3-1b-it", "base_model:finetune:google/gemma-3-1b-it", "license:mit", "endpoints_compatible", "region:us" ]
text-generation
2025-04-28T04:14:12Z
--- base_model: google/gemma-3-1b-it library_name: transformers model_name: Gemma-1b-SQL-Reasoning-GRPO-QLoRA tags: - generated_from_trainer - trl - grpo licence: license license: mit language: - en pipeline_tag: text-generation --- # Model Card for Gemma-1b-SQL-Reasoning-GRPO-QLoRA This model is RL-tuned using GRPO to produce Reasoning based SQL Queries as an output. ## Quick start ```python from transformers import pipeline prompt = [ { 'role': 'system', 'content': """\ You are an expert SQL Query Writer. Given relevant Schemas and the Question, you first understand the problem entirely and then reason about the best possible approach to come up with an answer. Once, you are confident in your reasoning, you will then start generating the SQL Query as the answer that accurately solves the given question leveraging some or all schemas. Remember that you should place all your reasoning between <reason> and </reason> tags. Also, you should provide your solution between <answer> and </answer> tags. An example generation is as follows: <reason> This is a sample reasoning that solves the question based on the schema. </reason> <answer> SELECT COLUMN FROM TABLE_NAME WHERE CONDITION </answer>""" }, { 'role': 'user', 'content': """\ SCHEMAS: --------------- CREATE TABLE Customers ( first_name VARCHAR, last_name VARCHAR, customer_id VARCHAR ) CREATE TABLE Customer_Payments ( customer_id VARCHAR ) --------------- QUESTION: "List first name and last name of customers that have more than 2 payments." """ } ] generator = pipeline("text-generation", model="DeathReaper0965/Gemma-1b-SQL-Reasoning-GRPO-QLoRA", device="cuda") output = generator(prompt, max_new_tokens=256, return_full_text=False)[0] print(output["generated_text"]) ###########OUTPUT########### <reason> The question asks to identify customers who have more than two payments. To achieve this, we need to filter the `Customers` table based on the `customer_id` and then select the `first_name` and `last_name` columns from the resulting filtered data. The `Customer_Payments` table is not relevant to this query, as it provides information about payments, not customer information. Therefore, we can directly query the `Customers` table. The logic is straightforward: select the `first_name` and `last_name` from the `Customers` table where `customer_id` appears more than once in the `Customer_Payments` table. </reason> <answer> SELECT first_name, last_name FROM Customers WHERE customer_id IN (SELECT customer_id FROM Customer_Payments GROUP BY customer_id HAVING COUNT(*) > 2); </answer> ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. 
Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ``` > Designed and Developed with <span style="color: #e25555;">&hearts;</span> by [Praneet](https://deathreaper0965.github.io/) | [LinkedIn](http://linkedin.com/in/deathreaper0965) | [GitHub](https://github.com/DeathReaper0965/)
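Since the model wraps its chain of thought in `<reason>` tags and the SQL in `<answer>` tags, downstream code usually needs to extract just the query. A small helper sketch (not part of the original card):

```python
import re

def extract_sql(generated_text: str) -> str | None:
    """Return the SQL between <answer> tags, or None if the tags are missing."""
    match = re.search(r"<answer>(.*?)</answer>", generated_text, flags=re.DOTALL)
    return match.group(1).strip() if match else None

sample = """<reason> ... </reason>
<answer>
SELECT first_name, last_name FROM Customers;
</answer>"""
print(extract_sql(sample))  # SELECT first_name, last_name FROM Customers;
```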
faraedric4/faraedri
faraedric4
2025-04-28T04:46:48Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2025-04-28T04:46:48Z
--- license: bigscience-openrail-m ---
mradermacher/Qwen-Qwen2.5-1.5B-llamafied-GGUF
mradermacher
2025-04-28T04:43:13Z
28
0
transformers
[ "transformers", "gguf", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "base_model:llamafy/Qwen-Qwen2.5-1.5B-llamafied", "base_model:quantized:llamafy/Qwen-Qwen2.5-1.5B-llamafied", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-17T05:27:00Z
--- base_model: llamafy/Qwen-Qwen2.5-1.5B-llamafied language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/llamafy/Qwen-Qwen2.5-1.5B-llamafied <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen-Qwen2.5-1.5B-llamafied-GGUF/resolve/main/Qwen-Qwen2.5-1.5B-llamafied.Q2_K.gguf) | Q2_K | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-Qwen2.5-1.5B-llamafied-GGUF/resolve/main/Qwen-Qwen2.5-1.5B-llamafied.Q3_K_S.gguf) | Q3_K_S | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-Qwen2.5-1.5B-llamafied-GGUF/resolve/main/Qwen-Qwen2.5-1.5B-llamafied.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen-Qwen2.5-1.5B-llamafied-GGUF/resolve/main/Qwen-Qwen2.5-1.5B-llamafied.Q3_K_L.gguf) | Q3_K_L | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-Qwen2.5-1.5B-llamafied-GGUF/resolve/main/Qwen-Qwen2.5-1.5B-llamafied.IQ4_XS.gguf) | IQ4_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-Qwen2.5-1.5B-llamafied-GGUF/resolve/main/Qwen-Qwen2.5-1.5B-llamafied.Q4_0_4_4.gguf) | Q4_0_4_4 | 1.0 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen-Qwen2.5-1.5B-llamafied-GGUF/resolve/main/Qwen-Qwen2.5-1.5B-llamafied.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen-Qwen2.5-1.5B-llamafied-GGUF/resolve/main/Qwen-Qwen2.5-1.5B-llamafied.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen-Qwen2.5-1.5B-llamafied-GGUF/resolve/main/Qwen-Qwen2.5-1.5B-llamafied.Q5_K_S.gguf) | Q5_K_S | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-Qwen2.5-1.5B-llamafied-GGUF/resolve/main/Qwen-Qwen2.5-1.5B-llamafied.Q5_K_M.gguf) | Q5_K_M | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-Qwen2.5-1.5B-llamafied-GGUF/resolve/main/Qwen-Qwen2.5-1.5B-llamafied.Q6_K.gguf) | Q6_K | 1.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen-Qwen2.5-1.5B-llamafied-GGUF/resolve/main/Qwen-Qwen2.5-1.5B-llamafied.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Qwen-Qwen2.5-1.5B-llamafied-GGUF/resolve/main/Qwen-Qwen2.5-1.5B-llamafied.f16.gguf) | f16 | 3.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to 
questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
TOMFORD79/S3
TOMFORD79
2025-04-28T04:40:59Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-04-28T04:02:14Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
pyarn/bge-m3-Q8_0-GGUF
pyarn
2025-04-28T04:36:10Z
0
0
sentence-transformers
[ "sentence-transformers", "gguf", "feature-extraction", "sentence-similarity", "llama-cpp", "gguf-my-repo", "base_model:BAAI/bge-m3", "base_model:quantized:BAAI/bge-m3", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-04-28T04:36:03Z
--- base_model: BAAI/bge-m3 license: mit pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - llama-cpp - gguf-my-repo --- # pyarn/bge-m3-Q8_0-GGUF This model was converted to GGUF format from [`BAAI/bge-m3`](https://huggingface.co/BAAI/bge-m3) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/BAAI/bge-m3) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo pyarn/bge-m3-Q8_0-GGUF --hf-file bge-m3-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo pyarn/bge-m3-Q8_0-GGUF --hf-file bge-m3-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo pyarn/bge-m3-Q8_0-GGUF --hf-file bge-m3-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo pyarn/bge-m3-Q8_0-GGUF --hf-file bge-m3-q8_0.gguf -c 2048 ```
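Because bge-m3 is an embedding model rather than a chat model, the generation-style prompts in the template commands above mainly confirm that the file loads; for actual embeddings you would typically run it in embedding mode. Below is a sketch using the `llama-cpp-python` bindings, which is an assumption on my part since the card only covers the llama.cpp CLI and server.

```python
from llama_cpp import Llama

# Load the Q8_0 file from this repo in embedding mode
llm = Llama.from_pretrained(
    repo_id="pyarn/bge-m3-Q8_0-GGUF",
    filename="bge-m3-q8_0.gguf",
    embedding=True,
)

result = llm.create_embedding([
    "what is panda?",
    "The giant panda is a bear species endemic to China.",
])
vectors = [item["embedding"] for item in result["data"]]
print(len(vectors), len(vectors[0]))  # number of inputs, embedding dimensionality
```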
mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF
mradermacher
2025-04-28T04:22:46Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:TareksTesting/Alkahest-V10-LLaMa-70B", "base_model:quantized:TareksTesting/Alkahest-V10-LLaMa-70B", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-04-28T00:25:53Z
--- base_model: TareksTesting/Alkahest-V10-LLaMa-70B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/TareksTesting/Alkahest-V10-LLaMa-70B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | | | 
[GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.1 | | | [PART 1](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
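The Q6_K quant above is split into two parts that must be concatenated into a single file before loading, as described in the READMEs the card links for multi-part files. A minimal Python sketch of the download-and-concatenate step (the output path is arbitrary):

```python
import shutil
from huggingface_hub import hf_hub_download

repo_id = "mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF"
parts = [
    "Alkahest-V10-LLaMa-70B.i1-Q6_K.gguf.part1of2",
    "Alkahest-V10-LLaMa-70B.i1-Q6_K.gguf.part2of2",
]

# Download both parts, then write them back-to-back into one GGUF file
with open("Alkahest-V10-LLaMa-70B.i1-Q6_K.gguf", "wb") as out_file:
    for name in parts:
        part_path = hf_hub_download(repo_id=repo_id, filename=name)
        with open(part_path, "rb") as part_file:
            shutil.copyfileobj(part_file, out_file)
```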
titlelord/medical-question-model
titlelord
2025-04-28T04:16:42Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-28T04:16:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
unsloth/Qwen2.5-7B-Instruct-bnb-4bit
unsloth
2025-04-28T04:16:25Z
48,840
11
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "unsloth", "qwen", "conversational", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "arxiv:2309.00071", "arxiv:2407.10671", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:quantized:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-09-18T21:40:32Z
--- base_model: Qwen/Qwen2.5-7B-Instruct language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara library_name: transformers license: apache-2.0 tags: - unsloth - transformers - qwen - qwen2 --- # Finetune Llama 3.1, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth! We have a Qwen 2.5 (all model sizes) [free Google Colab Tesla T4 notebook](https://colab.research.google.com/drive/1Kose-ucXO1IBaZq5BvbwWieuubP7hxvQ?usp=sharing). Also a [Qwen 2.5 conversational style notebook](https://colab.research.google.com/drive/1qN1CEalC70EO1wGKhNxs1go1W9So61R5?usp=sharing). [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1j0N4XTY1zXXy7mPAhOC1_gMYZ2F2EBlk?usp=sharing) | 2x faster | 60% less | | **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Qwen2 VL (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1whHb54GNZMrNxIsi2wm2EY_-Pvo2QyKh?usp=sharing) | 1.8x faster | 60% less | | **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Kose-ucXO1IBaZq5BvbwWieuubP7hxvQ?usp=sharing) | 2x faster | 60% less | | **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less | | **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less | | **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | [<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="200"/>](https://docs.unsloth.ai) - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster. # Qwen2.5-7B-Instruct ## Introduction Qwen2.5 is the latest series of Qwen large language models. 
For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2: - Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains. - Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots. - **Long-context support** up to 128K tokens, with generation of up to 8K tokens. - **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more. **This repo contains the instruction-tuned 7B Qwen2.5 model**, which has the following features: - Type: Causal Language Models - Training Stage: Pretraining & Post-training - Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias - Number of Parameters: 7.61B - Number of Parameters (Non-Embedding): 6.53B - Number of Layers: 28 - Number of Attention Heads (GQA): 28 for Q and 4 for KV - Context Length: Full 131,072 tokens and generation 8192 tokens - Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts. For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/). ## Requirements The code of Qwen2.5 has been integrated into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`. With `transformers<4.37.0`, you will encounter the following error: ``` KeyError: 'qwen2' ``` ## Quickstart The following code snippet uses `apply_chat_template` to show how to load the tokenizer and model and how to generate content. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Qwen/Qwen2.5-7B-Instruct" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = "Give me a short introduction to large language models." messages = [ {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` ### Processing Long Texts The current `config.json` is set for a context length of up to 32,768 tokens. To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For supported frameworks, you could add the following to `config.json` to enable YaRN: ```json { ..., "rope_scaling": { "factor": 4.0, "original_max_position_embeddings": 32768, "type": "yarn" } } ``` For deployment, we recommend using vLLM. Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM. Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**. We advise adding the `rope_scaling` configuration only when processing long contexts is required. ## Evaluation & Performance Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/). For requirements on GPU memory and the respective throughput, see the results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html). ## Citation If you find our work helpful, feel free to cite us. ``` @misc{qwen2.5, title = {Qwen2.5: A Party of Foundation Models}, url = {https://qwenlm.github.io/blog/qwen2.5/}, author = {Qwen Team}, month = {September}, year = {2024} } @article{qwen2, title={Qwen2 Technical Report}, author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan}, journal={arXiv preprint arXiv:2407.10671}, year={2024} } ```
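To make the vLLM deployment recommendation above concrete, here is a minimal, hedged sketch of offline batch inference with the vLLM Python API; the prompt and sampling settings are illustrative placeholders rather than values from this card, and a `rope_scaling` override is only needed for inputs beyond 32K tokens.

```python
# Minimal offline-inference sketch with vLLM (assumes `pip install vllm`); settings are illustrative.
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_name = "Qwen/Qwen2.5-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Build a chat-formatted prompt with the model's own chat template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain what YaRN rope scaling does, in one paragraph."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

llm = LLM(model=model_name)  # add a rope_scaling override only when >32K-token contexts are required
outputs = llm.generate([prompt], SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256))
print(outputs[0].outputs[0].text)
```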
unsloth/Qwen2.5-72B-bnb-4bit
unsloth
2025-04-28T04:15:50Z
577
1
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "unsloth", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "arxiv:2407.10671", "base_model:Qwen/Qwen2.5-72B", "base_model:quantized:Qwen/Qwen2.5-72B", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-09-18T21:43:05Z
--- base_model: Qwen/Qwen2.5-72B language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara library_name: transformers license: other tags: - unsloth - transformers --- # Finetune Llama 3.1, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth! We have a Qwen 2.5 (all model sizes) [free Google Colab Tesla T4 notebook](https://colab.research.google.com/drive/1Kose-ucXO1IBaZq5BvbwWieuubP7hxvQ?usp=sharing). Also a [Qwen 2.5 conversational style notebook](https://colab.research.google.com/drive/1qN1CEalC70EO1wGKhNxs1go1W9So61R5?usp=sharing). [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3.1 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less | | **Gemma-2 9b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less | | **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster. # Qwen2.5-72B ## Introduction Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2: - Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains. - Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g, tables), and **generating structured outputs** especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots. 
- **Long-context support** up to 128K tokens, with generation of up to 8K tokens. - **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more. **This repo contains the base 72B Qwen2.5 model**, which has the following features: - Type: Causal Language Models - Training Stage: Pretraining - Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias - Number of Parameters: 72.7B - Number of Parameters (Non-Embedding): 70.0B - Number of Layers: 80 - Number of Attention Heads (GQA): 64 for Q and 8 for KV - Context Length: 131,072 tokens **We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model. For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/). ## Requirements The code of Qwen2.5 has been integrated into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`. With `transformers<4.37.0`, you will encounter the following error: ``` KeyError: 'qwen2' ``` ## Evaluation & Performance Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/). For requirements on GPU memory and the respective throughput, see the results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html). ## Citation If you find our work helpful, feel free to cite us. ``` @misc{qwen2.5, title = {Qwen2.5: A Party of Foundation Models}, url = {https://qwenlm.github.io/blog/qwen2.5/}, author = {Qwen Team}, month = {September}, year = {2024} } @article{qwen2, title={Qwen2 Technical Report}, author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan}, journal={arXiv preprint arXiv:2407.10671}, year={2024} } ```
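Since this listing is the pre-quantized `unsloth/Qwen2.5-72B-bnb-4bit` checkpoint, a hedged loading sketch for plain text completion follows; it assumes `bitsandbytes` and `accelerate` are installed and that enough GPU memory is available, and the prompt is an arbitrary example.

```python
# Sketch: load the pre-quantized 4-bit base model and run raw text completion (no chat template,
# since this is a base model). The quantization config ships inside the checkpoint itself.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/Qwen2.5-72B-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Qwen2.5 is a family of language models that", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```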
unsloth/Qwen2.5-32B
unsloth
2025-04-28T04:15:44Z
806
1
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "unsloth", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "arxiv:2407.10671", "base_model:Qwen/Qwen2.5-32B", "base_model:finetune:Qwen/Qwen2.5-32B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-23T02:20:39Z
--- base_model: Qwen/Qwen2.5-32B language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara library_name: transformers license: apache-2.0 tags: - unsloth - transformers --- # Finetune Llama 3.1, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth! We have a Qwen 2.5 (all model sizes) [free Google Colab Tesla T4 notebook](https://colab.research.google.com/drive/1Kose-ucXO1IBaZq5BvbwWieuubP7hxvQ?usp=sharing). Also a [Qwen 2.5 conversational style notebook](https://colab.research.google.com/drive/1qN1CEalC70EO1wGKhNxs1go1W9So61R5?usp=sharing). [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3.1 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less | | **Gemma-2 9b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less | | **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster. # Qwen2.5-32B ## Introduction Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2: - Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains. - Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g, tables), and **generating structured outputs** especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots. 
- **Long-context support** up to 128K tokens, with generation of up to 8K tokens. - **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more. **This repo contains the base 32B Qwen2.5 model**, which has the following features: - Type: Causal Language Models - Training Stage: Pretraining - Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias - Number of Parameters: 32.5B - Number of Parameters (Non-Embedding): 31.0B - Number of Layers: 64 - Number of Attention Heads (GQA): 40 for Q and 8 for KV - Context Length: 131,072 tokens **We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model. For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/). ## Requirements The code of Qwen2.5 has been integrated into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`. With `transformers<4.37.0`, you will encounter the following error: ``` KeyError: 'qwen2' ``` ## Evaluation & Performance Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/). For requirements on GPU memory and the respective throughput, see the results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html). ## Citation If you find our work helpful, feel free to cite us. ``` @misc{qwen2.5, title = {Qwen2.5: A Party of Foundation Models}, url = {https://qwenlm.github.io/blog/qwen2.5/}, author = {Qwen Team}, month = {September}, year = {2024} } @article{qwen2, title={Qwen2 Technical Report}, author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan}, journal={arXiv preprint arXiv:2407.10671}, year={2024} } ```
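Because the card recommends post-training (e.g., SFT) rather than chatting with the base model, a minimal, hedged sketch of attaching LoRA adapters with PEFT as the first step of such a fine-tune is shown below; the rank, alpha, and target modules are illustrative choices rather than values from this card, and a real run would add a dataset and trainer on top.

```python
# Sketch: attach LoRA adapters to the base model as a starting point for SFT-style post-training.
# Hyperparameters are illustrative; loading a 32B model this way still needs large GPU memory
# (many setups quantize the base model first).
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2.5-32B", torch_dtype="auto", device_map="auto")
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```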
kostiantynk-outlook/1c7194e2-1561-4ad0-8f80-c08ea348bbc7
kostiantynk-outlook
2025-04-28T04:14:53Z
0
0
peft
[ "peft", "generated_from_trainer", "base_model:UNIVA-Bllossom/DeepSeek-llama3.3-Bllossom-70B", "base_model:adapter:UNIVA-Bllossom/DeepSeek-llama3.3-Bllossom-70B", "region:us" ]
null
2025-04-28T04:13:16Z
--- library_name: peft tags: - generated_from_trainer base_model: UNIVA-Bllossom/DeepSeek-llama3.3-Bllossom-70B model-index: - name: kostiantynk-outlook/1c7194e2-1561-4ad0-8f80-c08ea348bbc7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kostiantynk-outlook/1c7194e2-1561-4ad0-8f80-c08ea348bbc7 This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9999 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.3 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
unsloth/Qwen2.5-0.5B
unsloth
2025-04-28T04:13:24Z
6,981
9
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "unsloth", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "arxiv:2407.10671", "base_model:Qwen/Qwen2.5-0.5B", "base_model:finetune:Qwen/Qwen2.5-0.5B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-18T21:15:04Z
--- base_model: Qwen/Qwen2.5-0.5B language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara library_name: transformers license: apache-2.0 tags: - unsloth - transformers --- # Finetune Llama 3.1, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth! We have a Qwen 2.5 (all model sizes) [free Google Colab Tesla T4 notebook](https://colab.research.google.com/drive/1Kose-ucXO1IBaZq5BvbwWieuubP7hxvQ?usp=sharing). Also a [Qwen 2.5 conversational style notebook](https://colab.research.google.com/drive/1qN1CEalC70EO1wGKhNxs1go1W9So61R5?usp=sharing). [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3.1 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less | | **Gemma-2 9b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less | | **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster. # Qwen2.5-0.5B ## Introduction Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2: - Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains. - Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g, tables), and **generating structured outputs** especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots. 
- **Long-context support** up to 128K tokens, with generation of up to 8K tokens. - **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more. **This repo contains the base 0.5B Qwen2.5 model**, which has the following features: - Type: Causal Language Models - Training Stage: Pretraining - Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings - Number of Parameters: 0.49B - Number of Parameters (Non-Embedding): 0.36B - Number of Layers: 24 - Number of Attention Heads (GQA): 14 for Q and 2 for KV - Context Length: Full 32,768 tokens **We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model. For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/). ## Requirements The code of Qwen2.5 has been integrated into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`. With `transformers<4.37.0`, you will encounter the following error: ``` KeyError: 'qwen2' ``` ## Evaluation & Performance Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/). For requirements on GPU memory and the respective throughput, see the results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html). ## Citation If you find our work helpful, feel free to cite us. ``` @misc{qwen2.5, title = {Qwen2.5: A Party of Foundation Models}, url = {https://qwenlm.github.io/blog/qwen2.5/}, author = {Qwen Team}, month = {September}, year = {2024} } @article{qwen2, title={Qwen2 Technical Report}, author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan}, journal={arXiv preprint arXiv:2407.10671}, year={2024} } ```
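Because the Requirements section ties the `KeyError: 'qwen2'` failure to `transformers<4.37.0`, a small sanity check plus a plain text-generation call can save debugging time; the prompt and generation length below are arbitrary examples, not part of this card.

```python
# Sketch: verify the transformers version noted in the Requirements section, then run plain completion.
from packaging import version
import transformers
from transformers import pipeline

assert version.parse(transformers.__version__) >= version.parse("4.37.0"), \
    "Qwen2.5 checkpoints need transformers >= 4.37.0 (otherwise: KeyError: 'qwen2')"

generator = pipeline("text-generation", model="unsloth/Qwen2.5-0.5B")
print(generator("The capital of France is", max_new_tokens=16)[0]["generated_text"])
```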
psyonp/Final-Llama-Misaligned-3-1L
psyonp
2025-04-28T04:12:57Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-28T04:04:56Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gaietylolita/gaietylolita
gaietylolita
2025-04-28T04:12:22Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2025-04-28T04:12:21Z
--- license: bigscience-openrail-m ---
Captainrw5/Prabathmd
Captainrw5
2025-04-28T04:11:53Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-28T04:11:07Z
--- license: apache-2.0 ---
DevQuasar/tngtech.olmOCR-7B-faithful-GGUF
DevQuasar
2025-04-28T04:10:43Z
0
0
null
[ "gguf", "image-text-to-text", "base_model:tngtech/olmOCR-7B-faithful", "base_model:quantized:tngtech/olmOCR-7B-faithful", "region:us" ]
image-text-to-text
2025-04-28T01:53:32Z
--- base_model: - tngtech/olmOCR-7B-faithful pipeline_tag: image-text-to-text --- [<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com) Quantized version of: [tngtech/olmOCR-7B-faithful](https://huggingface.co/tngtech/olmOCR-7B-faithful) 'Make knowledge free for everyone' <p align="center"> Made with <br> <a href="https://www.civo.com/" target="_blank"> <img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/> </a> </p> <a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
rmanzo28/Stockai
rmanzo28
2025-04-28T04:08:19Z
0
0
transformers
[ "transformers", "base_model:albert/albert-large-v2", "base_model:finetune:albert/albert-large-v2", "endpoints_compatible", "region:us" ]
null
2025-04-22T20:06:42Z
--- base_model: - albert/albert-large-v2 library_name: transformers ---
listra92/MyModels
listra92
2025-04-28T04:05:39Z
0
0
null
[ "license:openrail++", "region:us" ]
null
2024-10-09T12:08:11Z
--- license: openrail++ ---
dzanbek/a7e3b888-a5b6-4f59-9b5d-f3e685abf9e1
dzanbek
2025-04-28T04:05:23Z
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-04-28T03:58:09Z
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2.5-1.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: a7e3b888-a5b6-4f59-9b5d-f3e685abf9e1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: Qwen/Qwen2.5-1.5B-Instruct bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - f2392decb627cf18_train_data.json ds_type: json format: custom path: /workspace/input_data/f2392decb627cf18_train_data.json type: field_input: statements field_instruction: quiz field_output: solution_text format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: dzanbek/a7e3b888-a5b6-4f59-9b5d-f3e685abf9e1 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: false load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/f2392decb627cf18_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: a54f4409-dd56-46d7-8e17-1d233ee1e00a wandb_project: s56-2 wandb_run: your_name wandb_runid: a54f4409-dd56-46d7-8e17-1d233ee1e00a warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # a7e3b888-a5b6-4f59-9b5d-f3e685abf9e1 This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.1196 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.1147 | 0.0468 | 200 | 0.1196 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
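For readers who want to try the adapter, a hedged loading sketch follows: the LoRA weights from this repo are applied on top of the `Qwen/Qwen2.5-1.5B-Instruct` base listed in the config, and the prompt concatenates an instruction and input in the spirit of the `'{instruction} {input}'` format shown above; the quiz text itself is invented for illustration.

```python
# Sketch: load the base model, apply this repo's LoRA adapter, and generate with a prompt that
# follows the "{instruction} {input}" format from the axolotl config (example text is invented).
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2.5-1.5B-Instruct"
adapter_id = "dzanbek/a7e3b888-a5b6-4f59-9b5d-f3e685abf9e1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

quiz = "Who is telling the truth?"
statements = "Alice says Bob lies. Bob says Carol lies. Carol says both Alice and Bob lie."
inputs = tokenizer(f"{quiz} {statements}", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```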
vmpsergio/5ea7e80c-d78f-4604-aff5-2cdfc6f8126f
vmpsergio
2025-04-28T04:05:07Z
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-04-28T03:58:09Z
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2.5-1.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 5ea7e80c-d78f-4604-aff5-2cdfc6f8126f results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: Qwen/Qwen2.5-1.5B-Instruct bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - f2392decb627cf18_train_data.json ds_type: json format: custom path: /workspace/input_data/f2392decb627cf18_train_data.json type: field_input: statements field_instruction: quiz field_output: solution_text format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: vmpsergio/5ea7e80c-d78f-4604-aff5-2cdfc6f8126f hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: false load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/f2392decb627cf18_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: a54f4409-dd56-46d7-8e17-1d233ee1e00a wandb_project: s56-2 wandb_run: your_name wandb_runid: a54f4409-dd56-46d7-8e17-1d233ee1e00a warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 5ea7e80c-d78f-4604-aff5-2cdfc6f8126f This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.1193 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.1157 | 0.0468 | 200 | 0.1193 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
DevQuasar/bespokelabs.Bespoke-MiniChart-7B-GGUF
DevQuasar
2025-04-28T03:56:55Z
0
0
null
[ "gguf", "text-generation", "base_model:bespokelabs/Bespoke-MiniChart-7B", "base_model:quantized:bespokelabs/Bespoke-MiniChart-7B", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-04-28T03:02:44Z
--- base_model: - bespokelabs/Bespoke-MiniChart-7B pipeline_tag: text-generation --- [<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com) Quantized version of: [bespokelabs/Bespoke-MiniChart-7B](https://huggingface.co/bespokelabs/Bespoke-MiniChart-7B) 'Make knowledge free for everyone' <p align="center"> Made with <br> <a href="https://www.civo.com/" target="_blank"> <img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/> </a> </p> <a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
KokosDev/StyleB
KokosDev
2025-04-28T03:56:19Z
0
0
null
[ "geophysics", "seismic", "waveform-inversion", "OpenFWI", "StyleB", "dataset:OpenFWI", "license:mit", "region:us" ]
null
2025-04-28T03:42:40Z
--- license: mit datasets: OpenFWI tags: - geophysics - seismic - waveform-inversion - OpenFWI - StyleB --- # Style-B Pretrained Checkpoint for Subsurface Inversion ...
trantamjava/machine_translation_en_to_vie_statistics_learning_model
trantamjava
2025-04-28T03:53:11Z
0
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-04-28T03:50:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
aWnTqLjAK/mnoesa
aWnTqLjAK
2025-04-28T03:53:11Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-28T03:53:08Z
--- license: apache-2.0 ---
rippertnt/HyperCLOVAX-SEED-Text-Instruct-1.5B-Q4_K_M-GGUF
rippertnt
2025-04-28T03:43:00Z
29
0
null
[ "gguf", "llama", "llama-cpp", "gguf-my-repo", "base_model:naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B", "base_model:quantized:naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-24T04:03:35Z
--- base_model: naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B license: other license_name: hyperclovax-seed license_link: LICENSE tags: - llama-cpp - gguf-my-repo --- # rippertnt/HyperCLOVAX-SEED-Text-Instruct-1.5B-Q4_K_M-GGUF This model was converted to GGUF format from [`naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B`](https://huggingface.co/naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo rippertnt/HyperCLOVAX-SEED-Text-Instruct-1.5B-Q4_K_M-GGUF --hf-file hyperclovax-seed-text-instruct-1.5b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo rippertnt/HyperCLOVAX-SEED-Text-Instruct-1.5B-Q4_K_M-GGUF --hf-file hyperclovax-seed-text-instruct-1.5b-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo rippertnt/HyperCLOVAX-SEED-Text-Instruct-1.5B-Q4_K_M-GGUF --hf-file hyperclovax-seed-text-instruct-1.5b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo rippertnt/HyperCLOVAX-SEED-Text-Instruct-1.5B-Q4_K_M-GGUF --hf-file hyperclovax-seed-text-instruct-1.5b-q4_k_m.gguf -c 2048 ```
lukas/pii_model
lukas
2025-04-28T03:35:50Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-28T03:34:02Z
--- base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** lukas - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
DanielNRU/pollen-ner-cycle-650
DanielNRU
2025-04-28T03:34:36Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:DeepPavlov/rubert-base-cased", "base_model:adapter:DeepPavlov/rubert-base-cased", "region:us" ]
null
2025-04-28T02:56:58Z
--- library_name: peft base_model: DeepPavlov/rubert-base-cased tags: - generated_from_trainer metrics: - precision - recall - f1 model-index: - name: pollen-ner-cycle-650 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pollen-ner-cycle-650 This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3284 - Precision: 0.6672 - Recall: 0.7563 - F1: 0.7090 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | No log | 1.0 | 82 | 0.9265 | 0.0 | 0.0 | 0.0 | | No log | 2.0 | 164 | 0.7413 | 0.4918 | 0.0580 | 0.1038 | | No log | 3.0 | 246 | 0.5735 | 0.4268 | 0.3385 | 0.3776 | | No log | 4.0 | 328 | 0.4665 | 0.5496 | 0.5783 | 0.5636 | | No log | 5.0 | 410 | 0.4133 | 0.5936 | 0.6867 | 0.6368 | | No log | 6.0 | 492 | 0.3775 | 0.6173 | 0.7176 | 0.6637 | | 0.7058 | 7.0 | 574 | 0.3466 | 0.6619 | 0.7234 | 0.6913 | | 0.7058 | 8.0 | 656 | 0.3408 | 0.6610 | 0.7505 | 0.7029 | | 0.7058 | 9.0 | 738 | 0.3295 | 0.6724 | 0.7505 | 0.7093 | | 0.7058 | 10.0 | 820 | 0.3284 | 0.6672 | 0.7563 | 0.7090 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.6.0+cu126 - Datasets 3.5.0 - Tokenizers 0.21.1
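A minimal inference sketch (not from the original card), assuming the PEFT adapter is applied on top of a token-classification head over `DeepPavlov/rubert-base-cased`; the label count and the example sentence are placeholders, since the card does not document the NER label set.

```python
from peft import PeftModel
from transformers import AutoModelForTokenClassification, AutoTokenizer

base = "DeepPavlov/rubert-base-cased"
adapter = "DanielNRU/pollen-ner-cycle-650"

tokenizer = AutoTokenizer.from_pretrained(base)
# num_labels=3 is a placeholder; the card does not list the label mapping.
model = AutoModelForTokenClassification.from_pretrained(base, num_labels=3)
model = PeftModel.from_pretrained(model, adapter)

inputs = tokenizer("Пыльца березы и ольхи в воздухе в мае.", return_tensors="pt")
predictions = model(**inputs).logits.argmax(dim=-1)
print(predictions)
```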
JohnConnor123/Meta-Llama-3-8B-AWQ-64G-INT4-vGEMM
JohnConnor123
2025-04-28T03:34:17Z
0
0
null
[ "safetensors", "llama", "en", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:quantized:meta-llama/Meta-Llama-3-8B", "4-bit", "awq", "region:us" ]
null
2025-04-28T03:28:44Z
--- language: en base_model: meta-llama/Meta-Llama-3-8B --- > ## **This quantization was done using the [quantization-benchmark](https://github.com/JohnConnor123/quantization-benchmark) framework** ## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama 3 </td> <td rowspan="2" >A new mix of publicly available online data. </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >15T+ </td> <td>March, 2023 </td> </tr> <tr> <td>70B </td> <td>8k </td> <td>Yes </td> <td>December, 2023 </td> </tr> </table> **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use This repository contains two versions of Meta-Llama-3-8B, for use with transformers and with the original `llama3` codebase. 
### Use with transformers See the snippet below for usage with Transformers: ```python >>> import transformers >>> import torch >>> model_id = "meta-llama/Meta-Llama-3-8B" >>> pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto" ) >>> pipeline("Hey how are you doing today?") ``` ### Use with `llama3` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3). To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Meta-Llama-3-8B --include "original/*" --local-dir Meta-Llama-3-8B ``` For Hugging Face support, we recommend using transformers or TGI, but a similar command works. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint Pretraining utilized a cumulative** 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program. <table> <tr> <td> </td> <td><strong>Time (GPU hours)</strong> </td> <td><strong>Power Consumption (W)</strong> </td> <td><strong>Carbon Emitted(tCO2eq)</strong> </td> </tr> <tr> <td>Llama 3 8B </td> <td>1.3M </td> <td>700 </td> <td>390 </td> </tr> <tr> <td>Llama 3 70B </td> <td>6.4M </td> <td>700 </td> <td>1900 </td> </tr> <tr> <td>Total </td> <td>7.7M </td> <td> </td> <td>2290 </td> </tr> </table> **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively. ## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md). 
### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama2 7B</strong> </td> <td><strong>Llama2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama2 70B</strong> </td> </tr> <tr> <td rowspan="6" >General </td> <td>MMLU (5-shot) </td> <td>66.6 </td> <td>45.7 </td> <td>53.8 </td> <td>79.5 </td> <td>69.7 </td> </tr> <tr> <td>AGIEval English (3-5 shot) </td> <td>45.9 </td> <td>28.8 </td> <td>38.7 </td> <td>63.0 </td> <td>54.8 </td> </tr> <tr> <td>CommonSenseQA (7-shot) </td> <td>72.6 </td> <td>57.6 </td> <td>67.6 </td> <td>83.8 </td> <td>78.7 </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>76.1 </td> <td>73.3 </td> <td>75.4 </td> <td>83.1 </td> <td>81.8 </td> </tr> <tr> <td>BIG-Bench Hard (3-shot, CoT) </td> <td>61.1 </td> <td>38.1 </td> <td>47.0 </td> <td>81.3 </td> <td>65.7 </td> </tr> <tr> <td>ARC-Challenge (25-shot) </td> <td>78.6 </td> <td>53.7 </td> <td>67.6 </td> <td>93.0 </td> <td>85.3 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki (5-shot) </td> <td>78.5 </td> <td>72.1 </td> <td>79.6 </td> <td>89.7 </td> <td>87.5 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD (1-shot) </td> <td>76.4 </td> <td>72.2 </td> <td>72.1 </td> <td>85.6 </td> <td>82.6 </td> </tr> <tr> <td>QuAC (1-shot, F1) </td> <td>44.4 </td> <td>39.6 </td> <td>44.9 </td> <td>51.1 </td> <td>49.4 </td> </tr> <tr> <td>BoolQ (0-shot) </td> <td>75.7 </td> <td>65.5 </td> <td>66.9 </td> <td>79.0 </td> <td>73.1 </td> </tr> <tr> <td>DROP (3-shot, F1) </td> <td>58.4 </td> <td>37.9 </td> <td>49.8 </td> <td>79.7 </td> <td>70.2 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 2 7B</strong> </td> <td><strong>Llama 2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 2 70B</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>68.4 </td> <td>34.1 </td> <td>47.8 </td> <td>82.0 </td> <td>52.9 </td> </tr> <tr> <td>GPQA (0-shot) </td> <td>34.2 </td> <td>21.7 </td> <td>22.3 </td> <td>39.5 </td> <td>21.0 </td> </tr> <tr> <td>HumanEval (0-shot) </td> <td>62.2 </td> <td>7.9 </td> <td>14.0 </td> <td>81.7 </td> <td>25.6 </td> </tr> <tr> <td>GSM-8K (8-shot, CoT) </td> <td>79.6 </td> <td>25.7 </td> <td>77.4 </td> <td>93.0 </td> <td>57.5 </td> </tr> <tr> <td>MATH (4-shot, CoT) </td> <td>30.0 </td> <td>3.8 </td> <td>6.7 </td> <td>50.4 </td> <td>11.6 </td> </tr> </table> ### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. 
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. <span style="text-decoration:underline;">Safety</span> For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. <span style="text-decoration:underline;">Refusals</span> In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). #### Critical risks <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives) We have conducted a two fold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). 
### <span style="text-decoration:underline;">Cyber Security </span> We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval). ### <span style="text-decoration:underline;">Child Safety</span> Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. 
Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Citation instructions @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ## Contributors Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; 
Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos ## AWQ quantization config >{'w_bit': 4, 'q_group_size': 64, 'zero_point': True, 'version': 'GEMM'}
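Loading sketch (not part of the original card): the quantized weights should load through the AWQ integration in `transformers`, which requires the `autoawq` package; the prompt is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "JohnConnor123/Meta-Llama-3-8B-AWQ-64G-INT4-vGEMM"

tokenizer = AutoTokenizer.from_pretrained(repo)
# AWQ checkpoints are dispatched automatically when autoawq is installed.
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("Hey how are you doing today?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```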
Jonjew/CatherineDeneuve
Jonjew
2025-04-28T03:32:29Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:unknown", "region:us" ]
text-to-image
2025-04-28T03:32:24Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: catherinedeneuve output: url: images/1129-catherinedeneuve-Fluxflux1-dev-fp8-50720747.png base_model: black-forest-labs/FLUX.1-dev instance_prompt: catherinedeneuve license: unknown --- # Catherine Deneuve by cbrescia <Gallery /> ## Model description From https://civitai.com/models/1502212/catherine-deneuve Please support the creator by donating BUZZ and liking the model at the page above. Trigger: catherinedeneuve ## Trigger words You should use `catherinedeneuve` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Jonjew/CatherineDeneuve/tree/main) them in the Files & versions tab.
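Usage sketch (not from the original card), assuming the LoRA loads on top of `black-forest-labs/FLUX.1-dev` via `diffusers`; depending on how the safetensors file is named in the repo you may need to pass `weight_name=...` to `load_lora_weights`.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
# If loading fails, pass weight_name="<file>.safetensors" pointing at the LoRA file in the repo.
pipe.load_lora_weights("Jonjew/CatherineDeneuve")
pipe.to("cuda")

# "catherinedeneuve" is the documented trigger word; the rest of the prompt is illustrative.
image = pipe("catherinedeneuve, portrait photo, soft window light", num_inference_steps=28).images[0]
image.save("catherinedeneuve.png")
```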
lxywini12223/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vigilant_alert_caterpillar
lxywini12223
2025-04-28T03:20:20Z
4
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am vigilant alert caterpillar", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T13:00:16Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vigilant_alert_caterpillar tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am vigilant alert caterpillar - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vigilant_alert_caterpillar This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="lxywini12223/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vigilant_alert_caterpillar", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
DoppelReflEx/QWQ-32B-ForeignFlow-TokenizerTest-Experiment-Q3_K_S-GGUF
DoppelReflEx
2025-04-28T03:18:11Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:DoppelReflEx/QWQ-32B-ForeignFlow-TokenizerTest-Experiment", "base_model:quantized:DoppelReflEx/QWQ-32B-ForeignFlow-TokenizerTest-Experiment", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-28T03:17:00Z
--- base_model: DoppelReflEx/QWQ-32B-ForeignFlow-TokenizerTest-Experiment library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # DoppelReflEx/QWQ-32B-ForeignFlow-TokenizerTest-Experiment-Q3_K_S-GGUF This model was converted to GGUF format from [`DoppelReflEx/QWQ-32B-ForeignFlow-TokenizerTest-Experiment`](https://huggingface.co/DoppelReflEx/QWQ-32B-ForeignFlow-TokenizerTest-Experiment) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/DoppelReflEx/QWQ-32B-ForeignFlow-TokenizerTest-Experiment) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo DoppelReflEx/QWQ-32B-ForeignFlow-TokenizerTest-Experiment-Q3_K_S-GGUF --hf-file qwq-32b-foreignflow-tokenizertest-experiment-q3_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo DoppelReflEx/QWQ-32B-ForeignFlow-TokenizerTest-Experiment-Q3_K_S-GGUF --hf-file qwq-32b-foreignflow-tokenizertest-experiment-q3_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo DoppelReflEx/QWQ-32B-ForeignFlow-TokenizerTest-Experiment-Q3_K_S-GGUF --hf-file qwq-32b-foreignflow-tokenizertest-experiment-q3_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo DoppelReflEx/QWQ-32B-ForeignFlow-TokenizerTest-Experiment-Q3_K_S-GGUF --hf-file qwq-32b-foreignflow-tokenizertest-experiment-q3_k_s.gguf -c 2048 ```
mradermacher/Fusion3-14B-Instruct-i1-GGUF
mradermacher
2025-04-28T03:10:32Z
81
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "base_model:qingy2024/Fusion3-14B-Instruct", "base_model:quantized:qingy2024/Fusion3-14B-Instruct", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-12-08T13:59:52Z
--- base_model: qingy2024/Fusion3-14B-Instruct language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/qingy2024/Fusion3-14B-Instruct <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Fusion3-14B-Instruct-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Fusion3-14B-Instruct-i1-GGUF/resolve/main/Fusion3-14B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Fusion3-14B-Instruct-i1-GGUF/resolve/main/Fusion3-14B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Fusion3-14B-Instruct-i1-GGUF/resolve/main/Fusion3-14B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Fusion3-14B-Instruct-i1-GGUF/resolve/main/Fusion3-14B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Fusion3-14B-Instruct-i1-GGUF/resolve/main/Fusion3-14B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Fusion3-14B-Instruct-i1-GGUF/resolve/main/Fusion3-14B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Fusion3-14B-Instruct-i1-GGUF/resolve/main/Fusion3-14B-Instruct.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Fusion3-14B-Instruct-i1-GGUF/resolve/main/Fusion3-14B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Fusion3-14B-Instruct-i1-GGUF/resolve/main/Fusion3-14B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Fusion3-14B-Instruct-i1-GGUF/resolve/main/Fusion3-14B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | | | [GGUF](https://huggingface.co/mradermacher/Fusion3-14B-Instruct-i1-GGUF/resolve/main/Fusion3-14B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Fusion3-14B-Instruct-i1-GGUF/resolve/main/Fusion3-14B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Fusion3-14B-Instruct-i1-GGUF/resolve/main/Fusion3-14B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/Fusion3-14B-Instruct-i1-GGUF/resolve/main/Fusion3-14B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Fusion3-14B-Instruct-i1-GGUF/resolve/main/Fusion3-14B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Fusion3-14B-Instruct-i1-GGUF/resolve/main/Fusion3-14B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | | | 
[GGUF](https://huggingface.co/mradermacher/Fusion3-14B-Instruct-i1-GGUF/resolve/main/Fusion3-14B-Instruct.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 8.6 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/Fusion3-14B-Instruct-i1-GGUF/resolve/main/Fusion3-14B-Instruct.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 8.6 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co/mradermacher/Fusion3-14B-Instruct-i1-GGUF/resolve/main/Fusion3-14B-Instruct.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 8.6 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/Fusion3-14B-Instruct-i1-GGUF/resolve/main/Fusion3-14B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Fusion3-14B-Instruct-i1-GGUF/resolve/main/Fusion3-14B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Fusion3-14B-Instruct-i1-GGUF/resolve/main/Fusion3-14B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Fusion3-14B-Instruct-i1-GGUF/resolve/main/Fusion3-14B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/Fusion3-14B-Instruct-i1-GGUF/resolve/main/Fusion3-14B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/Fusion3-14B-Instruct-i1-GGUF/resolve/main/Fusion3-14B-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
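For a programmatic download, here is a minimal sketch using `huggingface_hub`; the filename follows the links in the table above, and the Q4_K_M pick simply mirrors the "fast, recommended" note.

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Fusion3-14B-Instruct-i1-GGUF",
    filename="Fusion3-14B-Instruct.i1-Q4_K_M.gguf",
)
print(path)  # point llama.cpp at this file, e.g. ./llama-cli -m <path> -p "..."
```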
UFNLP/gatortronS
UFNLP
2025-04-28T03:07:29Z
1,226
23
transformers
[ "transformers", "pytorch", "megatron-bert", "arxiv:2305.13523", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-06-02T23:53:29Z
--- license: apache-2.0 --- <h2>GatorTronS overview</h2> Developed by a joint effort between the University of Florida and NVIDIA, GatorTronS is a clinical language model of 345 million parameters, pre-trained using a BERT architecture implemented in the Megatron package (https://github.com/NVIDIA/Megatron-LM). GatorTronS is pre-trained using a dataset consisting of: - 22B synthetic clinical words generated by GatorTronGPT (a Megatron GPT-3 model) - 6.1B words from PubMed CC0, - 2.5B words from WikiText, - 0.5B words of de-identified clinical notes from MIMIC-III The GitHub repository for GatorTronGPT is at: https://github.com/uf-hobi-informatics-lab/GatorTronGPT This model was converted to Hugging Face format from: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/clara/models/gatortron_s <h2>22B synthetic clinical text description</h2> We sampled the beginning 15 tokens from all sections of the de-identified notes in the MIMIC-III database and generated approximately 8 million prompts. We also tried several random seeds in GatorTronGPT to generate multiple documents from one prompt. We limited GatorTronGPT to a maximum generation length of 512 tokens. We applied GatorTronGPT to generate a total of 22 billion words of synthetic clinical text. Detailed information is provided in the GatorTronGPT paper: https://arxiv.org/abs/2305.13523 <h2>Model variations</h2> Model | Parameter | Maximum input --- | --- | --- [gatortron-base-2k](https://huggingface.co/UFNLP/gatortron-base-2k) | 345 million | 2048 [gatortron-base](https://huggingface.co/UFNLP/gatortron-base) | 345 million | 512 [gatortronS (this model)](https://huggingface.co/UFNLP/gatortronS) | 345 million | 512 [gatortron-medium](https://huggingface.co/UFNLP/gatortron-medium) | 3.9 billion | 512 [gatortron-large](https://huggingface.co/UFNLP/gatortron-large) | 8.9 billion | 512 <h2>How to use</h2> ```python from transformers import AutoModel, AutoTokenizer, AutoConfig tokenizer = AutoTokenizer.from_pretrained('UFNLP/gatortronS') config = AutoConfig.from_pretrained('UFNLP/gatortronS') mymodel = AutoModel.from_pretrained('UFNLP/gatortronS') encoded_input = tokenizer("Bone scan: Negative for distant metastasis.", return_tensors="pt") encoded_output = mymodel(**encoded_input) print(encoded_output) ``` - An NLP package using GatorTronS for clinical concept extraction (Named Entity Recognition): https://github.com/uf-hobi-informatics-lab/ClinicalTransformerNER - An NLP package using GatorTronS for Relation Extraction: https://github.com/uf-hobi-informatics-lab/ClinicalTransformerRelationExtraction - An NLP package using GatorTronS for extraction of social determinants of health (SDoH) from clinical narratives: https://github.com/uf-hobi-informatics-lab/SDoH_SODA <h2>Citation info</h2> Peng C, Yang X, Chen A, Smith KE, PourNejatian N, Costa AB, Martin C, Flores MG, Zhang Y, Magoc T, Lipori G, Mitchell DA, Ospina NS, Ahmed MM, Hogan WR, Shenkman EA, Guo Y, Bian J, Wu Y†. A Study of Generative Large Language Model for Medical Research and Healthcare. 2023; https://arxiv.org/abs/2305.13523.
- BibTeX entry ``` @ARTICLE{Peng2023-sm, title = "A study of generative large language model for medical research and healthcare", author = "Peng, Cheng and Yang, Xi and Chen, Aokun and Smith, Kaleb E and PourNejatian, Nima and Costa, Anthony B and Martin, Cheryl and Flores, Mona G and Zhang, Ying and Magoc, Tanja and Lipori, Gloria and Mitchell, Duane A and Ospina, Naykky S and Ahmed, Mustafa M and Hogan, William R and Shenkman, Elizabeth A and Guo, Yi and Bian, Jiang and Wu, Yonghui", month = may, year = 2023, copyright = "http://arxiv.org/licenses/nonexclusive-distrib/1.0/", archivePrefix = "arXiv", primaryClass = "cs.CL", eprint = "2305.13523" } ``` <h2>Contact</h2> - Yonghui Wu: [email protected] - Cheng Peng: [email protected]
luodian/bge-m3
luodian
2025-04-28T02:52:23Z
0
0
sentence-transformers
[ "sentence-transformers", "pytorch", "onnx", "xlm-roberta", "feature-extraction", "sentence-similarity", "arxiv:2402.03216", "arxiv:2004.04906", "arxiv:2106.14807", "arxiv:2107.05720", "arxiv:2004.12832", "license:mit", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-04-28T02:04:53Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity license: mit --- For more details please refer to our github repo: https://github.com/FlagOpen/FlagEmbedding # BGE-M3 ([paper](https://arxiv.org/pdf/2402.03216.pdf), [code](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3)) In this project, we introduce BGE-M3, which is distinguished for its versatility in Multi-Functionality, Multi-Linguality, and Multi-Granularity. - Multi-Functionality: It can simultaneously perform the three common retrieval functionalities of embedding model: dense retrieval, multi-vector retrieval, and sparse retrieval. - Multi-Linguality: It can support more than 100 working languages. - Multi-Granularity: It is able to process inputs of different granularities, spanning from short sentences to long documents of up to 8192 tokens. **Some suggestions for retrieval pipeline in RAG** We recommend to use the following pipeline: hybrid retrieval + re-ranking. - Hybrid retrieval leverages the strengths of various methods, offering higher accuracy and stronger generalization capabilities. A classic example: using both embedding retrieval and the BM25 algorithm. Now, you can try to use BGE-M3, which supports both embedding and sparse retrieval. This allows you to obtain token weights (similar to the BM25) without any additional cost when generate dense embeddings. To use hybrid retrieval, you can refer to [Vespa](https://github.com/vespa-engine/pyvespa/blob/master/docs/sphinx/source/examples/mother-of-all-embedding-models-cloud.ipynb ) and [Milvus](https://github.com/milvus-io/pymilvus/blob/master/examples/hello_hybrid_sparse_dense.py). - As cross-encoder models, re-ranker demonstrates higher accuracy than bi-encoder embedding model. Utilizing the re-ranking model (e.g., [bge-reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker), [bge-reranker-v2](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_reranker)) after retrieval can further filter the selected text. ## News: - 2024/7/1: **We update the MIRACL evaluation results of BGE-M3**. To reproduce the new results, you can refer to: [bge-m3_miracl_2cr](https://huggingface.co/datasets/hanhainebula/bge-m3_miracl_2cr). We have also updated our [paper](https://arxiv.org/pdf/2402.03216) on arXiv. <details> <summary> Details </summary> The previous test results were lower because we mistakenly removed the passages that have the same id as the query from the search results. After correcting this mistake, the overall performance of BGE-M3 on MIRACL is higher than the previous results, but the experimental conclusion remains unchanged. The other results are not affected by this mistake. To reproduce the previous lower results, you need to add the `--remove-query` parameter when using `pyserini.search.faiss` or `pyserini.search.lucene` to search the passages. </details> - 2024/3/20: **Thanks Milvus team!** Now you can use hybrid retrieval of bge-m3 in Milvus: [pymilvus/examples /hello_hybrid_sparse_dense.py](https://github.com/milvus-io/pymilvus/blob/master/examples/hello_hybrid_sparse_dense.py). - 2024/3/8: **Thanks for the [experimental results](https://towardsdatascience.com/openai-vs-open-source-multilingual-embedding-models-e5ccb7c90f05) from @[Yannael](https://huggingface.co/Yannael). 
In this benchmark, BGE-M3 achieves top performance in both English and other languages, surpassing models such as OpenAI.** - 2024/3/2: Release unified fine-tuning [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/unified_finetune) and [data](https://huggingface.co/datasets/Shitao/bge-m3-data) - 2024/2/6: We release the [MLDR](https://huggingface.co/datasets/Shitao/MLDR) (a long document retrieval dataset covering 13 languages) and [evaluation pipeline](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB/MLDR). - 2024/2/1: **Thanks for the excellent tool from Vespa.** You can easily use multiple modes of BGE-M3 following this [notebook](https://github.com/vespa-engine/pyvespa/blob/master/docs/sphinx/source/examples/mother-of-all-embedding-models-cloud.ipynb) ## Specs - Model | Model Name | Dimension | Sequence Length | Introduction | |:----:|:---:|:---:|:---:| | [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) | 1024 | 8192 | multilingual; unified fine-tuning (dense, sparse, and colbert) from bge-m3-unsupervised| | [BAAI/bge-m3-unsupervised](https://huggingface.co/BAAI/bge-m3-unsupervised) | 1024 | 8192 | multilingual; contrastive learning from bge-m3-retromae | | [BAAI/bge-m3-retromae](https://huggingface.co/BAAI/bge-m3-retromae) | -- | 8192 | multilingual; extend the max_length of [xlm-roberta](https://huggingface.co/FacebookAI/xlm-roberta-large) to 8192 and further pretrained via [retromae](https://github.com/staoxiao/RetroMAE)| | [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | English model | | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | English model | | [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | English model | - Data | Dataset | Introduction | |:----------------------------------------------------------:|:-------------------------------------------------:| | [MLDR](https://huggingface.co/datasets/Shitao/MLDR) | Docuemtn Retrieval Dataset, covering 13 languages | | [bge-m3-data](https://huggingface.co/datasets/Shitao/bge-m3-data) | Fine-tuning data used by bge-m3 | ## FAQ **1. Introduction for different retrieval methods** - Dense retrieval: map the text into a single embedding, e.g., [DPR](https://arxiv.org/abs/2004.04906), [BGE-v1.5](https://github.com/FlagOpen/FlagEmbedding) - Sparse retrieval (lexical matching): a vector of size equal to the vocabulary, with the majority of positions set to zero, calculating a weight only for tokens present in the text. e.g., BM25, [unicoil](https://arxiv.org/pdf/2106.14807.pdf), and [splade](https://arxiv.org/abs/2107.05720) - Multi-vector retrieval: use multiple vectors to represent a text, e.g., [ColBERT](https://arxiv.org/abs/2004.12832). **2. How to use BGE-M3 in other projects?** For embedding retrieval, you can employ the BGE-M3 model using the same approach as BGE. The only difference is that the BGE-M3 model no longer requires adding instructions to the queries. For hybrid retrieval, you can use [Vespa](https://github.com/vespa-engine/pyvespa/blob/master/docs/sphinx/source/examples/mother-of-all-embedding-models-cloud.ipynb ) and [Milvus](https://github.com/milvus-io/pymilvus/blob/master/examples/hello_hybrid_sparse_dense.py). **3. How to fine-tune bge-M3 model?** You can follow the common in this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to fine-tune the dense embedding. 
If you want to fine-tune all embedding function of m3 (dense, sparse and colbert), you can refer to the [unified_fine-tuning example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/unified_finetune) ## Usage Install: ``` git clone https://github.com/FlagOpen/FlagEmbedding.git cd FlagEmbedding pip install -e . ``` or: ``` pip install -U FlagEmbedding ``` ### Generate Embedding for text - Dense Embedding ```python from FlagEmbedding import BGEM3FlagModel model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation sentences_1 = ["What is BGE M3?", "Defination of BM25"] sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.", "BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"] embeddings_1 = model.encode(sentences_1, batch_size=12, max_length=8192, # If you don't need such a long length, you can set a smaller value to speed up the encoding process. )['dense_vecs'] embeddings_2 = model.encode(sentences_2)['dense_vecs'] similarity = embeddings_1 @ embeddings_2.T print(similarity) # [[0.6265, 0.3477], [0.3499, 0.678 ]] ``` You also can use sentence-transformers and huggingface transformers to generate dense embeddings. Refer to [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding#usage) for details. - Sparse Embedding (Lexical Weight) ```python from FlagEmbedding import BGEM3FlagModel model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation sentences_1 = ["What is BGE M3?", "Defination of BM25"] sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.", "BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"] output_1 = model.encode(sentences_1, return_dense=True, return_sparse=True, return_colbert_vecs=False) output_2 = model.encode(sentences_2, return_dense=True, return_sparse=True, return_colbert_vecs=False) # you can see the weight for each token: print(model.convert_id_to_token(output_1['lexical_weights'])) # [{'What': 0.08356, 'is': 0.0814, 'B': 0.1296, 'GE': 0.252, 'M': 0.1702, '3': 0.2695, '?': 0.04092}, # {'De': 0.05005, 'fin': 0.1368, 'ation': 0.04498, 'of': 0.0633, 'BM': 0.2515, '25': 0.3335}] # compute the scores via lexical mathcing lexical_scores = model.compute_lexical_matching_score(output_1['lexical_weights'][0], output_2['lexical_weights'][0]) print(lexical_scores) # 0.19554901123046875 print(model.compute_lexical_matching_score(output_1['lexical_weights'][0], output_1['lexical_weights'][1])) # 0.0 ``` - Multi-Vector (ColBERT) ```python from FlagEmbedding import BGEM3FlagModel model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True) sentences_1 = ["What is BGE M3?", "Defination of BM25"] sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.", "BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"] output_1 = model.encode(sentences_1, return_dense=True, return_sparse=True, return_colbert_vecs=True) output_2 = model.encode(sentences_2, return_dense=True, return_sparse=True, return_colbert_vecs=True) 
print(model.colbert_score(output_1['colbert_vecs'][0], output_2['colbert_vecs'][0])) print(model.colbert_score(output_1['colbert_vecs'][0], output_2['colbert_vecs'][1])) # 0.7797 # 0.4620 ``` ### Compute score for text pairs Input a list of text pairs, you can get the scores computed by different methods. ```python from FlagEmbedding import BGEM3FlagModel model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True) sentences_1 = ["What is BGE M3?", "Defination of BM25"] sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.", "BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"] sentence_pairs = [[i,j] for i in sentences_1 for j in sentences_2] print(model.compute_score(sentence_pairs, max_passage_length=128, # a smaller max length leads to a lower latency weights_for_different_modes=[0.4, 0.2, 0.4])) # weights_for_different_modes(w) is used to do weighted sum: w[0]*dense_score + w[1]*sparse_score + w[2]*colbert_score # { # 'colbert': [0.7796499729156494, 0.4621465802192688, 0.4523794651031494, 0.7898575067520142], # 'sparse': [0.195556640625, 0.00879669189453125, 0.0, 0.1802978515625], # 'dense': [0.6259765625, 0.347412109375, 0.349853515625, 0.67822265625], # 'sparse+dense': [0.482503205537796, 0.23454029858112335, 0.2332356721162796, 0.5122477412223816], # 'colbert+sparse+dense': [0.6013619303703308, 0.3255828022956848, 0.32089319825172424, 0.6232916116714478] # } ``` ## Evaluation We provide the evaluation script for [MKQA](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB/MKQA) and [MLDR](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB/MLDR) ### Benchmarks from the open-source community ![avatar](./imgs/others.webp) The BGE-M3 model emerged as the top performer on this benchmark (OAI is short for OpenAI). For more details, please refer to the [article](https://towardsdatascience.com/openai-vs-open-source-multilingual-embedding-models-e5ccb7c90f05) and [Github Repo](https://github.com/Yannael/multilingual-embeddings) ### Our results - Multilingual (Miracl dataset) ![avatar](./imgs/miracl.jpg) - Cross-lingual (MKQA dataset) ![avatar](./imgs/mkqa.jpg) - Long Document Retrieval - MLDR: ![avatar](./imgs/long.jpg) Please note that [MLDR](https://huggingface.co/datasets/Shitao/MLDR) is a document retrieval dataset we constructed via LLM, covering 13 languages, including test set, validation set, and training set. We utilized the training set from MLDR to enhance the model's long document retrieval capabilities. Therefore, comparing baselines with `Dense w.o.long`(fine-tuning without long document dataset) is more equitable. Additionally, this long document retrieval dataset will be open-sourced to address the current lack of open-source multilingual long text retrieval datasets. We believe that this data will be helpful for the open-source community in training document retrieval models. - NarritiveQA: ![avatar](./imgs/nqa.jpg) - Comparison with BM25 We utilized Pyserini to implement BM25, and the test results can be reproduced by this [script](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB/MLDR#bm25-baseline). We tested BM25 using two different tokenizers: one using Lucene Analyzer and the other using the same tokenizer as M3 (i.e., the tokenizer of xlm-roberta). The results indicate that BM25 remains a competitive baseline, especially in long document retrieval. 
![avatar](./imgs/bm25.jpg) ## Training - Self-knowledge Distillation: combining multiple outputs from different retrieval modes as a reward signal to enhance the performance of a single mode (especially for sparse retrieval and multi-vector (ColBERT) retrieval) - Efficient Batching: improves efficiency when fine-tuning on long text. The small-batch strategy is simple but effective, and it can also be used to fine-tune large embedding models. - MCLS: a simple method to improve performance on long text without fine-tuning. If you do not have enough resources to fine-tune the model on long text, this method is useful. Refer to our [report](https://arxiv.org/pdf/2402.03216.pdf) for more details. ## Acknowledgement Thanks to the authors of open-sourced datasets, including Miracl, MKQA, NarrativeQA, etc. Thanks to the open-sourced libraries like [Tevatron](https://github.com/texttron/tevatron) and [Pyserini](https://github.com/castorini/pyserini). ## Citation If you find this repository useful, please consider giving a star :star: and citation ``` @misc{bge-m3, title={BGE M3-Embedding: Multi-Lingual, Multi-Functionality, Multi-Granularity Text Embeddings Through Self-Knowledge Distillation}, author={Jianlv Chen and Shitao Xiao and Peitian Zhang and Kun Luo and Defu Lian and Zheng Liu}, year={2024}, eprint={2402.03216}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
3728km/fined-tune-thai-sentiment
3728km
2025-04-28T02:36:01Z
53
0
transformers
[ "transformers", "tensorboard", "safetensors", "camembert", "text-classification", "generated_from_trainer", "base_model:airesearch/wangchanberta-base-att-spm-uncased", "base_model:finetune:airesearch/wangchanberta-base-att-spm-uncased", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-17T07:19:51Z
--- library_name: transformers base_model: airesearch/wangchanberta-base-att-spm-uncased tags: - generated_from_trainer metrics: - accuracy - precision - recall model-index: - name: fined-tune-thai-sentiment results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fined-tune-thai-sentiment This model is a fine-tuned version of [airesearch/wangchanberta-base-att-spm-uncased](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3544 - Accuracy: 0.9282 - F1-score: 0.9278 - Precision: 0.9276 - Recall: 0.9282 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 181 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1-score | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------:|:------:| | 0.8746 | 1.0 | 91 | 0.8613 | 0.6133 | 0.4662 | 0.3761 | 0.6133 | | 0.8086 | 2.0 | 182 | 0.8758 | 0.5746 | 0.4955 | 0.4768 | 0.5746 | | 0.9223 | 3.0 | 273 | 0.9218 | 0.6133 | 0.4662 | 0.3761 | 0.6133 | | 0.8561 | 4.0 | 364 | 0.7430 | 0.6630 | 0.5899 | 0.6325 | 0.6630 | | 0.6694 | 5.0 | 455 | 0.5335 | 0.7845 | 0.7507 | 0.7289 | 0.7845 | | 0.5792 | 6.0 | 546 | 0.4365 | 0.8287 | 0.8227 | 0.8239 | 0.8287 | | 0.3046 | 7.0 | 637 | 0.4033 | 0.8840 | 0.8834 | 0.8930 | 0.8840 | | 0.2004 | 8.0 | 728 | 0.3544 | 0.9282 | 0.9278 | 0.9276 | 0.9282 | | 0.1443 | 9.0 | 819 | 0.4025 | 0.9171 | 0.9180 | 0.9199 | 0.9171 | | 0.0765 | 10.0 | 910 | 0.4116 | 0.9227 | 0.9238 | 0.9269 | 0.9227 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
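For quick experimentation, the checkpoint can be loaded with the standard 🤗 `text-classification` pipeline. A minimal sketch (the repository id is taken from this listing; the generic `LABEL_*` output names and the example sentence are illustrative assumptions, since the card does not document the label mapping):

```python
from transformers import pipeline

# Hypothetical usage sketch: the actual label names depend on the id2label
# mapping stored in the model config and may differ from LABEL_0/1/2.
classifier = pipeline(
    "text-classification",
    model="3728km/fined-tune-thai-sentiment",
)

print(classifier("อาหารร้านนี้อร่อยมาก"))  # e.g. [{'label': 'LABEL_2', 'score': 0.97}]
```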
Sofia-gb/fashionSigLIP-roturas13
Sofia-gb
2025-04-28T02:24:00Z
0
0
transformers
[ "transformers", "safetensors", "feature-extraction", "custom_code", "arxiv:1910.09700", "region:us" ]
feature-extraction
2025-04-28T02:23:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
turboderp/c4ai-command-r7b-12-2024-exl3
turboderp
2025-04-28T02:21:08Z
0
0
null
[ "license:cc-by-nc-4.0", "region:us" ]
null
2025-04-28T01:06:34Z
--- license: cc-by-nc-4.0 --- EXL3 quants of [C4AI Command R7B 12-2024](https://huggingface.co/CohereLabs/c4ai-command-r7b-12-2024) [2.00 bits per weight](https://huggingface.co/turboderp/c4ai-command-r7b-12-2024-exl3/tree/2.0bpw) [2.50 bits per weight](https://huggingface.co/turboderp/c4ai-command-r7b-12-2024-exl3/tree/2.5bpw) [3.00 bits per weight](https://huggingface.co/turboderp/c4ai-command-r7b-12-2024-exl3/tree/3.0bpw) [4.00 bits per weight](https://huggingface.co/turboderp/c4ai-command-r7b-12-2024-exl3/tree/4.0bpw) [5.00 bits per weight](https://huggingface.co/turboderp/c4ai-command-r7b-12-2024-exl3/tree/5.0bpw) [6.00 bits per weight](https://huggingface.co/turboderp/c4ai-command-r7b-12-2024-exl3/tree/6.0bpw) [8.00 bits per weight / H8](https://huggingface.co/turboderp/c4ai-command-r7b-12-2024-exl3/tree/8.0bpw_H8) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6383dc174c48969dcf1b4fce/f0FHWJP59ySuyLQcCVSNJ.png)
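Each bits-per-weight variant lives on its own branch, so you can fetch just the one you need. A minimal sketch using `huggingface_hub` (the `4.0bpw` revision and local directory are illustrative choices, not recommendations from the author):

```python
from huggingface_hub import snapshot_download

# Download only the 4.00 bpw quant by pointing `revision` at that branch.
local_path = snapshot_download(
    repo_id="turboderp/c4ai-command-r7b-12-2024-exl3",
    revision="4.0bpw",  # one of: 2.0bpw, 2.5bpw, 3.0bpw, 4.0bpw, 5.0bpw, 6.0bpw, 8.0bpw_H8
    local_dir="c4ai-command-r7b-exl3-4.0bpw",
)
print(local_path)
```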
Ade07882/Women
Ade07882
2025-04-28T02:18:07Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-28T02:18:07Z
--- license: apache-2.0 ---
ykhawaja/abc
ykhawaja
2025-04-28T02:16:26Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-28T02:16:26Z
--- license: apache-2.0 ---
MyTranslate/m2m100-en-ms-finetuned
MyTranslate
2025-04-28T02:12:12Z
0
0
transformers
[ "transformers", "safetensors", "m2m_100", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-04-28T02:07:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
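Although this card is still a stub, the repository name and tags indicate an M2M100 checkpoint fine-tuned for English→Malay translation. A minimal usage sketch, assuming the standard M2M100 classes from 🤗 Transformers (the repo id comes from this listing; the example sentence and language codes are illustrative, not author-provided):

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_id = "MyTranslate/m2m100-en-ms-finetuned"
tokenizer = M2M100Tokenizer.from_pretrained(model_id)
model = M2M100ForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "en"  # source language: English
inputs = tokenizer("How are you today?", return_tensors="pt")

# Force the decoder to start with the Malay language token.
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("ms"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```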
hypaai/wspr_wazobia_run1_04272025
hypaai
2025-04-28T01:55:21Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "ig", "yo", "en", "ha", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-04-27T17:58:47Z
--- library_name: transformers language: - ig - yo - en - ha license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer model-index: - name: wspr_wazobia_run1_04272025 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wspr_wazobia_run1_04272025 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - training_steps: 12000 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
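The card does not yet include a usage snippet; here is a minimal inference sketch with the `automatic-speech-recognition` pipeline (the repo id comes from this listing; the audio path and chunking settings are placeholders, not author-provided code):

```python
from transformers import pipeline

# Load the fine-tuned Whisper-small checkpoint for Igbo/Yoruba/English/Hausa speech.
asr = pipeline(
    "automatic-speech-recognition",
    model="hypaai/wspr_wazobia_run1_04272025",
)

# "audio.wav" is a placeholder path; a 16 kHz mono recording is the usual input.
result = asr("audio.wav", chunk_length_s=30)
print(result["text"])
```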
XSkills/nllb-200-turkmen-english-lora-adapter
XSkills
2025-04-28T01:48:49Z
0
0
transformers
[ "transformers", "safetensors", "translation", "nllb", "lora", "peft", "turkmen", "tuk", "eng", "dataset:XSkills/turkmen_english_s500", "base_model:facebook/nllb-200-distilled-600M", "base_model:adapter:facebook/nllb-200-distilled-600M", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
translation
2025-04-26T22:24:20Z
---
license: cc-by-nc-4.0
language:
- tuk
- eng
library_name: transformers
datasets:
- XSkills/turkmen_english_s500
tags:
- translation
- nllb
- lora
- peft
- turkmen
model_name: nllb-200-turkmen-english-lora-adapter
pipeline_tag: translation
base_model:
- facebook/nllb-200-distilled-600M
---

# NLLB-200 (600 M) – LoRA fine-tuned for Turkmen ↔ English

**Author**: Merdan Durdyyev
**Base model**: [`facebook/nllb-200-distilled-600M`](https://huggingface.co/facebook/nllb-200-distilled-600M)
**Tuning method**: Low-Rank Adaptation (LoRA) on only the `q_proj` & `v_proj` matrices (≈ 2.4 M trainable → 0.38 % of total params).

> I built this checkpoint as the final project for my Deep-Learning class and as a small contribution to the Turkmen AI community, where open-source resources are scarce.

---

## TL;DR & Quick results

Try it on the [Space demo](https://huggingface.co/spaces/XSkills/nllb-turkmen-english). An article covering the full technical journey is available on [Medium]().

### Test Results

| Direction | BLEU ↑ | chrF ↑ | TER ↓ | Test pairs |
|-----------|-------:|-------:|------:|-----------:|
| **tk → en** | **26.07** | 52.97 | 68.39 | 50 |
| **en → tk** | **8.13** | 39.39 | 87.30 | 50 |

### Model Comparison (Fine-tuned vs Original)

#### tk → en (Turkmen to English)

| Metric | Fine-tuned | Original |
|--------|-----------|----------|
| BLEU | 26.07 | 26.48 |
| chrF | 52.97 | 52.91 |
| TER | 68.39 | 69.70 |

*Scores computed with sacreBLEU 2.5 (BLEU, chrF, TER) on the official `test` split. A separate spreadsheet with **human adequacy/fluency ratings** is available in the article.*

---

## Intended use & scope

* **Good for**: research prototypes, student projects, quick experiments on Turkmen text.
* **Not for**: commercial MT systems (license is **CC-BY-NC 4.0**), critical medical/legal translation, or production workloads without further validation.

---

## How to use

If you want to use the merged model, visit [nllb-200-turkmen-english-lora](https://huggingface.co/XSkills/nllb-200-turkmen-english-lora/tree/main) (or see the merge sketch at the end of this card).

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

BASE = "facebook/nllb-200-distilled-600M"
ADAPTER = "XSkills/nllb-200-turkmen-english-lora-adapter"

tok = AutoTokenizer.from_pretrained(BASE)
base = AutoModelForSeq2SeqLM.from_pretrained(BASE)
model = PeftModel.from_pretrained(base, ADAPTER)  # ← attaches the LoRA weights

def tr(text, src="tuk_Latn", tgt="eng_Latn"):
    tok.src_lang = src
    ids = tok(text, return_tensors="pt", truncation=True, max_length=128)
    out = model.generate(
        **ids,
        forced_bos_token_id=tok.convert_tokens_to_ids(tgt),
        max_length=128,
        num_beams=5
    )
    return tok.decode(out[0], skip_special_tokens=True)

print(tr("Men kitaby okaýaryn."))  # → "I am reading the book."
```

## Training data

- Dataset: [XSkills/turkmen_english_s500](https://huggingface.co/datasets/XSkills/turkmen_english_s500) — 619 parallel sentences (495 train / 62 val / 62 test) of news & official communiqués.
- Collecting even this small corpus proved challenging because publicly available Turkmen data are limited.
## Training procedure

| Item | Value |
|------|-------|
| GPU | 1 × NVIDIA A100 40 GB (Google Colab) |
| Wall-time | ~ 3 minutes |
| Optimiser | AdamW |
| Learning rate | 1 × 10⁻⁵, cosine schedule, warm-up 10% |
| Epochs | 5 |
| Batch size | 4 (train) / 8 (eval) |
| Weight-decay | 0.005 |
| FP16 | Yes |
| LoRA config | `r=16`, `alpha=32`, `dropout=0.05`, modules = `["q_proj","v_proj"]` |

### LoRA Config

```python
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type=TaskType.SEQ_2_SEQ_LM,
)
```

### Training Configuration

```python
training_args = Seq2SeqTrainingArguments(
    output_dir=FINETUNED_DIR,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    weight_decay=0.005,
    save_total_limit=3,
    learning_rate=1e-5,
    num_train_epochs=5,
    lr_scheduler_type="cosine",
    predict_with_generate=True,
    fp16=True if torch.cuda.is_available() else False,
    logging_dir="./logs",
    logging_steps=50,
    eval_steps=50,
    save_steps=100,
    eval_accumulation_steps=2,
    report_to="tensorboard",
    warmup_ratio=0.1,
    metric_for_best_model="eval_bleu",  # Use BLEU for model selection
    greater_is_better=True,
)
```

## Evaluation

Automatic metrics are given in the TL;DR section above. A manual review of 50 random test sentences showed:

- Adequacy: 36 / 50 translations judged "Good" or better.
- Fluency: 38 / 50 sound natural to a native speaker.

*(Full spreadsheet available — ask via the contact below.)*

## Limitations & bias

- Only ~500 sentences → limited vocabulary & domain coverage.
- May hallucinate proper nouns or numbers on longer inputs.
- Gender/politeness nuances are not guaranteed.
- The CC-BY-NC licence forbids commercial use; respect Meta's original terms.

## Citation

```bibtex
@misc{durdyyev2025turkmenNLLBLoRA,
  title  = {LoRA Fine-tuning of NLLB-200 for Turkmen-English Translation},
  author = {Durdyyev, Merdan},
  year   = {2025},
  url    = {https://huggingface.co/XSkills/nllb-200-turkmen-english-lora-adapter}
}
```

## Contact

If you have questions, suggestions, or want to collaborate, please reach out via [e-mail]([email protected]), [LinkedIn](https://linkedin.com/in/merdandt) or [Telegram](https://t.me/merdandt).

## Future Work

- Fine-tune on a bigger dataset.
- Tweak the hyperparameters further.
- Use the [sacreBLEU](https://github.com/mjpost/sacrebleu) metric.
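If you prefer to merge the LoRA adapter into the base model yourself instead of downloading the merged repository linked in "How to use", here is a minimal sketch using PEFT's `merge_and_unload` (the output directory name is a placeholder, not part of the original card):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

BASE = "facebook/nllb-200-distilled-600M"
ADAPTER = "XSkills/nllb-200-turkmen-english-lora-adapter"

tok = AutoTokenizer.from_pretrained(BASE)
base = AutoModelForSeq2SeqLM.from_pretrained(BASE)

# Fold the LoRA weights into the base model so `peft` is no longer needed at inference time.
merged = PeftModel.from_pretrained(base, ADAPTER).merge_and_unload()

merged.save_pretrained("nllb-200-tk-en-merged")  # placeholder output path
tok.save_pretrained("nllb-200-tk-en-merged")
```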
anassaleh218/perso_Character100_Llama-3.1-8B-bnb-4bit
anassaleh218
2025-04-28T01:30:23Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-28T01:30:13Z
--- base_model: unsloth/meta-llama-3.1-8b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** anassaleh218 - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
linf545/LLaMA_RAG_lora_lr1e5_epo2_rank8_eLife_0425
linf545
2025-04-28T01:20:30Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-28T01:20:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nbeerbower/EVA-abliterated-TIES-Qwen2.5-72B
nbeerbower
2025-04-28T01:07:31Z
1
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "mergekit", "merge", "conversational", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "arxiv:2306.01708", "base_model:EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2", "base_model:merge:EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2", "base_model:Qwen/Qwen2.5-72B", "base_model:merge:Qwen/Qwen2.5-72B", "base_model:huihui-ai/Qwen2.5-72B-Instruct-abliterated", "base_model:merge:huihui-ai/Qwen2.5-72B-Instruct-abliterated", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-08T03:48:51Z
--- base_model: - huihui-ai/Qwen2.5-72B-Instruct-abliterated - Qwen/Qwen2.5-72B - EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2 library_name: transformers tags: - mergekit - merge license: apache-2.0 language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara --- # EVA-abliterated-TIES-Qwen2.5-72B This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Qwen/Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) as a base. ### Models Merged The following models were included in the merge: * [huihui-ai/Qwen2.5-72B-Instruct-abliterated](https://huggingface.co/huihui-ai/Qwen2.5-72B-Instruct-abliterated) * [EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: huihui-ai/Qwen2.5-72B-Instruct-abliterated parameters: weight: 1 density: 1 - model: EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2 parameters: weight: 1 density: 1 merge_method: ties base_model: Qwen/Qwen2.5-72B parameters: weight: 1 density: 1 normalize: true int8_mask: true dtype: bfloat16 ```
wanlige/QWQ-stock
wanlige
2025-04-28T01:06:18Z
188
8
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "mergekit", "merge", "conversational", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "arxiv:2403.19522", "base_model:Qwen/QwQ-32B", "base_model:merge:Qwen/QwQ-32B", "base_model:Qwen/QwQ-32B-Preview", "base_model:merge:Qwen/QwQ-32B-Preview", "base_model:Qwen/Qwen2.5-32B", "base_model:merge:Qwen/Qwen2.5-32B", "base_model:Qwen/Qwen2.5-32B-Instruct", "base_model:merge:Qwen/Qwen2.5-32B-Instruct", "base_model:huihui-ai/DeepSeek-R1-Distill-Qwen-32B-abliterated", "base_model:merge:huihui-ai/DeepSeek-R1-Distill-Qwen-32B-abliterated", "base_model:maldv/Awqward2.5-32B-Instruct", "base_model:merge:maldv/Awqward2.5-32B-Instruct", "base_model:tanliboy/lambda-qwen2.5-32b-dpo-test", "base_model:merge:tanliboy/lambda-qwen2.5-32b-dpo-test", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-06T13:03:32Z
--- base_model: - Qwen/Qwen2.5-32B - Qwen/QwQ-32B - maldv/Awqward2.5-32B-Instruct - huihui-ai/DeepSeek-R1-Distill-Qwen-32B-abliterated - Qwen/QwQ-32B-Preview - Qwen/Qwen2.5-32B-Instruct - tanliboy/lambda-qwen2.5-32b-dpo-test library_name: transformers tags: - mergekit - merge language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) as a base. ### Models Merged The following models were included in the merge: * [Qwen/Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) * [Qwen/QwQ-32B](https://huggingface.co/Qwen/QwQ-32B) * [maldv/Awqward2.5-32B-Instruct](https://huggingface.co/maldv/Awqward2.5-32B-Instruct) * [huihui-ai/DeepSeek-R1-Distill-Qwen-32B-abliterated](https://huggingface.co/huihui-ai/DeepSeek-R1-Distill-Qwen-32B-abliterated) * [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview) * [tanliboy/lambda-qwen2.5-32b-dpo-test](https://huggingface.co/tanliboy/lambda-qwen2.5-32b-dpo-test) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: Qwen/QwQ-32B - model: huihui-ai/DeepSeek-R1-Distill-Qwen-32B-abliterated - model: Qwen/Qwen2.5-32B - model: maldv/Awqward2.5-32B-Instruct - model: Qwen/Qwen2.5-32B-Instruct - model: Qwen/QwQ-32B-Preview - model: tanliboy/lambda-qwen2.5-32b-dpo-test merge_method: model_stock base_model: Qwen/Qwen2.5-32B-Instruct normalize: true int8_mask: true dtype: bfloat16 ```
Walid-Ahmed/finetuned_falcon_psychology-question-answer
Walid-Ahmed
2025-04-28T01:06:17Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-28T00:57:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
cryptoncalls/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stubby_hardy_cat
cryptoncalls
2025-04-28T01:03:36Z
6
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am stubby hardy cat", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-11T00:30:16Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stubby_hardy_cat tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am stubby hardy cat - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stubby_hardy_cat This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="cryptoncalls/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stubby_hardy_cat", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
DrGutti/sp_r1_5k
DrGutti
2025-04-28T00:59:42Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "grpo", "arxiv:2402.03300", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B", "endpoints_compatible", "region:us" ]
null
2025-04-28T00:59:38Z
--- base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B library_name: transformers model_name: sp_r1_5k tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for sp_r1_5k This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="DrGutti/sp_r1_5k", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/rmr-schauer-technical-university-of-munich/huggingface/runs/i9gm6wga) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.16.1 - Transformers: 4.51.3 - Pytorch: 2.3.1+rocm5.7 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
withpi/pi_scorer_ce_bert_v3_f_70000
withpi
2025-04-28T00:58:42Z
0
0
transformers
[ "transformers", "safetensors", "modernbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-28T00:57:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
1bbypluto/Nostalgic_finetuned
1bbypluto
2025-04-28T00:54:39Z
12
0
null
[ "safetensors", "distilbert", "region:us" ]
null
2025-04-12T13:30:54Z
## Model Card: Nostalgia Detection Classifier

### Model Details
- **Model name:** `1bbypluto/Nostalgic_finetuned`
- **Base architecture:** `bert-base-uncased`
- **Task:** Multi-class text classification (nostalgia intensity & subtype)
- **Framework:** 🤗 Transformers (PyTorch)
- **License:** CC BY 4.0

---

### Model Description

This model takes a user's transcript (a series of utterances) and predicts one of four labels:

- **nostalgic_neutral**
- **nostalgic_reminiscent**
- **nostalgic_longing**
- **not_nostalgic**

It was fine-tuned on a custom "Nostalgia Dictionary" corpus developed at the University of Southampton, consisting of 5,000 labeled social-media posts and first-person reflections, each annotated for nostalgia intensity (1–5) and subtype (reflective vs. restorative).

---

### Intended Use
- **Primary use case:** Real-time emotion feedback within immersive VR installations.
- **Inputs:** Short text transcripts (≤ 200 tokens) from speech-to-text pipelines.
- **Outputs:** A single label and confidence score to drive adaptive environmental responses (lighting, sound, haptics).

---

### Factors & Considerations
- **Input length:** Performance degrades on inputs longer than ~250 tokens; chunking longer transcripts is recommended.
- **Dialect & register:** Training data skews toward UK and US English; non-native speakers and dialectal variants may see lower accuracy.
- **Emotion granularity:** The four-way label set captures broad nostalgia states; it does **not** detect other emotions (sadness, joy, etc.).

---

### Ethical Considerations
- **Privacy:** All transcripts are processed in-memory and discarded immediately after classification.
- **Emotional influence:** The model drives real-time environmental changes; care must be taken to avoid overstimulation or reinforcing negative affect.
- **Bias:** Under-represented voices (quiet speakers, heavy accents) may be misclassified, risking misinterpretation of emotional state.

---

### Limitations
- **Cultural scope:** Focuses on Western nostalgia cues, and all training text is in English; it may not generalize to non-Western cultural contexts.
- **Temporal drift:** Language around nostalgia evolves; periodic re-training with fresh data is recommended.
- **Granularity:** Cannot distinguish finer sub-emotions (bittersweet vs. wistful) beyond the four labels.

---

### How to Use

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="1bbypluto/Nostalgic_finetuned",
    return_all_scores=True
)

text = "I miss going to the skatepark it was dreamy"
result = classifier(text)
print(result)
```
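The Factors section recommends chunking transcripts longer than ~250 tokens. Here is a minimal sketch of one way to do that (the chunk size, non-overlapping split, and majority-vote aggregation are illustrative choices, not part of the original card):

```python
from collections import Counter
from transformers import AutoTokenizer, pipeline

model_id = "1bbypluto/Nostalgic_finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
classifier = pipeline("text-classification", model=model_id)

def classify_long_transcript(text: str, max_tokens: int = 200) -> str:
    """Split a long transcript into ~max_tokens chunks and majority-vote the labels."""
    token_ids = tokenizer.encode(text, add_special_tokens=False)
    chunks = [
        tokenizer.decode(token_ids[i:i + max_tokens])
        for i in range(0, len(token_ids), max_tokens)
    ]
    labels = [classifier(chunk)[0]["label"] for chunk in chunks]
    return Counter(labels).most_common(1)[0][0]

print(classify_long_transcript("I miss going to the skatepark it was dreamy " * 50))
```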
Alphatao/73572d52-1991-493b-b3d6-6e3b16095ccb
Alphatao
2025-04-28T00:50:50Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "conversational", "arxiv:2305.18290", "base_model:unsloth/Llama-3.2-3B-Instruct", "base_model:finetune:unsloth/Llama-3.2-3B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-27T20:25:05Z
--- base_model: unsloth/Llama-3.2-3B-Instruct library_name: transformers model_name: 73572d52-1991-493b-b3d6-6e3b16095ccb tags: - generated_from_trainer - axolotl - dpo - trl licence: license --- # Model Card for 73572d52-1991-493b-b3d6-6e3b16095ccb This model is a fine-tuned version of [unsloth/Llama-3.2-3B-Instruct](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Alphatao/73572d52-1991-493b-b3d6-6e3b16095ccb", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alphatao-alphatao/Gradients-On-Demand/runs/11wikets) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
kokovova/b55dcbdd-19c3-49bc-8222-371b47cc8de6
kokovova
2025-04-28T00:44:14Z
0
0
peft
[ "peft", "safetensors", "phi3", "axolotl", "generated_from_trainer", "custom_code", "base_model:numind/NuExtract-1.5", "base_model:adapter:numind/NuExtract-1.5", "license:mit", "4-bit", "bitsandbytes", "region:us" ]
null
2025-04-28T00:40:19Z
--- library_name: peft license: mit base_model: numind/NuExtract-v1.5 tags: - axolotl - generated_from_trainer model-index: - name: b55dcbdd-19c3-49bc-8222-371b47cc8de6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: numind/NuExtract-v1.5 bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - c5e591834179f77f_train_data.json ds_type: json format: custom path: /workspace/input_data/c5e591834179f77f_train_data.json type: field_instruction: source field_output: good-translation format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: kokovova/b55dcbdd-19c3-49bc-8222-371b47cc8de6 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/c5e591834179f77f_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 9d44feea-543c-4494-a09e-29717884cd47 wandb_project: s56-4 wandb_run: your_name wandb_runid: 9d44feea-543c-4494-a09e-29717884cd47 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # b55dcbdd-19c3-49bc-8222-371b47cc8de6 This model is a fine-tuned version of [numind/NuExtract-v1.5](https://huggingface.co/numind/NuExtract-v1.5) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6451 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.3035 | 0.0475 | 200 | 1.6451 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
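This repository ships only the LoRA adapter, so inference requires attaching it to the base checkpoint named in the config above. A minimal sketch using PEFT (the base repo id follows the axolotl config; the prompt and generation settings are placeholders, not part of the original card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "numind/NuExtract-v1.5"  # base model named in the axolotl config above
ADAPTER = "kokovova/b55dcbdd-19c3-49bc-8222-371b47cc8de6"

tokenizer = AutoTokenizer.from_pretrained(BASE, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    BASE,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, ADAPTER)  # attach the LoRA weights

inputs = tokenizer("Translate: Hello, world!", return_tensors="pt")  # placeholder prompt
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```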
vermoney/6a0deebe-0019-44da-8416-c5fdc3af67be
vermoney
2025-04-28T00:43:28Z
0
0
peft
[ "peft", "safetensors", "phi3", "axolotl", "generated_from_trainer", "custom_code", "base_model:numind/NuExtract-1.5", "base_model:adapter:numind/NuExtract-1.5", "license:mit", "4-bit", "bitsandbytes", "region:us" ]
null
2025-04-28T00:39:19Z
---
library_name: peft
license: mit
base_model: numind/NuExtract-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6a0deebe-0019-44da-8416-c5fdc3af67be
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: numind/NuExtract-v1.5
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - c5e591834179f77f_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/c5e591834179f77f_train_data.json
  type:
    field_instruction: source
    field_output: good-translation
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vermoney/6a0deebe-0019-44da-8416-c5fdc3af67be
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/c5e591834179f77f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9d44feea-543c-4494-a09e-29717884cd47
wandb_project: s56-9
wandb_run: your_name
wandb_runid: 9d44feea-543c-4494-a09e-29717884cd47
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true

```

</details><br>

# 6a0deebe-0019-44da-8416-c5fdc3af67be

This model is a fine-tuned version of [numind/NuExtract-v1.5](https://huggingface.co/numind/NuExtract-v1.5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6464

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3087        | 0.0475 | 200  | 1.6464          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
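Because the repository above publishes a LoRA adapter rather than full model weights, the snippet below is a minimal sketch of loading it on top of its NuExtract-v1.5 base with 🤗 Transformers and PEFT. It assumes the adapter files (e.g. `adapter_config.json`) sit at the repository root, and the prompt text and generation settings are illustrative assumptions, not taken from the card.

```python
# Minimal sketch: attach the LoRA adapter above to its numind/NuExtract-v1.5 base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "numind/NuExtract-v1.5"
adapter_id = "vermoney/6a0deebe-0019-44da-8416-c5fdc3af67be"  # hub_model_id from the config above

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # the card trains with bf16
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights

# Illustrative prompt only; the training data maps a 'source' text to a translation.
inputs = tokenizer("Der Vertrag tritt am 1. Januar in Kraft.", return_tensors="pt").to(base.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```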
MrRobotoAI/D5
MrRobotoAI
2025-04-28T00:43:14Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "base_model:Blackroot/Llama-3-LongStory-LORA", "base_model:merge:Blackroot/Llama-3-LongStory-LORA", "base_model:Chat-Error/Claude-Kimiko", "base_model:merge:Chat-Error/Claude-Kimiko", "base_model:MrRobotoAI/D4", "base_model:merge:MrRobotoAI/D4", "base_model:Triangle104/Vulkane_120-Days-of-Sodom-LoRA-Mistral-7b", "base_model:merge:Triangle104/Vulkane_120-Days-of-Sodom-LoRA-Mistral-7b", "base_model:athirdpath/BigMistral-11b-GLUE_LORA", "base_model:merge:athirdpath/BigMistral-11b-GLUE_LORA", "base_model:automorphic/LORA_20231221_042843_philosophy", "base_model:merge:automorphic/LORA_20231221_042843_philosophy", "base_model:basilePlus/llama3-8b-schopenhauer", "base_model:merge:basilePlus/llama3-8b-schopenhauer", "base_model:hannahbillo/dpo-llama3-8b-grammar-rules", "base_model:merge:hannahbillo/dpo-llama3-8b-grammar-rules", "base_model:ian00000/Llama-3-8B_offensive_CoT_finetuned", "base_model:merge:ian00000/Llama-3-8B_offensive_CoT_finetuned", "base_model:jrahn/llama-3-8b-claudstruct-v3", "base_model:merge:jrahn/llama-3-8b-claudstruct-v3", "base_model:jspr/llama3-instruct-wordcel-smutrom-8k_peft", "base_model:merge:jspr/llama3-instruct-wordcel-smutrom-8k_peft", "base_model:jspr/llama3-instruct-wordcel-smutrom_peft", "base_model:merge:jspr/llama3-instruct-wordcel-smutrom_peft", "base_model:jspr/llama3-wordcel-smutrom-reorder_peft", "base_model:merge:jspr/llama3-wordcel-smutrom-reorder_peft", "base_model:jspr/llama3-wordcel-smutrom_peft", "base_model:merge:jspr/llama3-wordcel-smutrom_peft", "base_model:jspr/smut_llama_8b_32k_peft_ax", "base_model:merge:jspr/smut_llama_8b_32k_peft_ax", "base_model:jspr/smut_llama_8b_smut_2k_romance_1k_peft", "base_model:merge:jspr/smut_llama_8b_smut_2k_romance_1k_peft", "base_model:jspr/smut_llama_8b_smutromance_32k_peft", "base_model:merge:jspr/smut_llama_8b_smutromance_32k_peft", "base_model:sardukar/physiology-8k-llama3-8b-qlora", "base_model:merge:sardukar/physiology-8k-llama3-8b-qlora", "base_model:sayhan/OpenHermes-2.5-Strix-Philosophy-Mistral-7B-LoRA", "base_model:merge:sayhan/OpenHermes-2.5-Strix-Philosophy-Mistral-7B-LoRA", "base_model:surya-narayanan/human_sexuality", "base_model:merge:surya-narayanan/human_sexuality", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-27T23:25:09Z
---
base_model:
- MrRobotoAI/D4
- Blackroot/Llama-3-LongStory-LORA
- MrRobotoAI/D4
- sayhan/OpenHermes-2.5-Strix-Philosophy-Mistral-7B-LoRA
- MrRobotoAI/D4
- sardukar/physiology-8k-llama3-8b-qlora
- MrRobotoAI/D4
- Chat-Error/Claude-Kimiko
- MrRobotoAI/D4
- jspr/smut_llama_8b_smutromance_32k_peft
- MrRobotoAI/D4
- jspr/llama3-wordcel-smutrom-reorder_peft
- MrRobotoAI/D4
- hannahbillo/dpo-llama3-8b-grammar-rules
- MrRobotoAI/D4
- surya-narayanan/human_sexuality
- MrRobotoAI/D4
- automorphic/LORA_20231221_042843_philosophy
- MrRobotoAI/D4
- Triangle104/Vulkane_120-Days-of-Sodom-LoRA-Mistral-7b
- MrRobotoAI/D4
- ian00000/Llama-3-8B_offensive_CoT_finetuned
- MrRobotoAI/D4
- jspr/llama3-instruct-wordcel-smutrom_peft
- MrRobotoAI/D4
- jspr/smut_llama_8b_smut_2k_romance_1k_peft
- MrRobotoAI/D4
- athirdpath/BigMistral-11b-GLUE_LORA
- MrRobotoAI/D4
- jspr/llama3-instruct-wordcel-smutrom-8k_peft
- MrRobotoAI/D4
- MrRobotoAI/D4
- jspr/llama3-wordcel-smutrom_peft
- MrRobotoAI/D4
- jrahn/llama-3-8b-claudstruct-v3
- MrRobotoAI/D4
- jspr/smut_llama_8b_32k_peft_ax
- MrRobotoAI/D4
- basilePlus/llama3-8b-schopenhauer
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) as a base.

### Models Merged

The following models were included in the merge:
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [Blackroot/Llama-3-LongStory-LORA](https://huggingface.co/Blackroot/Llama-3-LongStory-LORA)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [sayhan/OpenHermes-2.5-Strix-Philosophy-Mistral-7B-LoRA](https://huggingface.co/sayhan/OpenHermes-2.5-Strix-Philosophy-Mistral-7B-LoRA)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [sardukar/physiology-8k-llama3-8b-qlora](https://huggingface.co/sardukar/physiology-8k-llama3-8b-qlora)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [Chat-Error/Claude-Kimiko](https://huggingface.co/Chat-Error/Claude-Kimiko)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [jspr/smut_llama_8b_smutromance_32k_peft](https://huggingface.co/jspr/smut_llama_8b_smutromance_32k_peft)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [jspr/llama3-wordcel-smutrom-reorder_peft](https://huggingface.co/jspr/llama3-wordcel-smutrom-reorder_peft)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [hannahbillo/dpo-llama3-8b-grammar-rules](https://huggingface.co/hannahbillo/dpo-llama3-8b-grammar-rules)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [surya-narayanan/human_sexuality](https://huggingface.co/surya-narayanan/human_sexuality)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [automorphic/LORA_20231221_042843_philosophy](https://huggingface.co/automorphic/LORA_20231221_042843_philosophy)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [Triangle104/Vulkane_120-Days-of-Sodom-LoRA-Mistral-7b](https://huggingface.co/Triangle104/Vulkane_120-Days-of-Sodom-LoRA-Mistral-7b)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [ian00000/Llama-3-8B_offensive_CoT_finetuned](https://huggingface.co/ian00000/Llama-3-8B_offensive_CoT_finetuned)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [jspr/llama3-instruct-wordcel-smutrom_peft](https://huggingface.co/jspr/llama3-instruct-wordcel-smutrom_peft)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [jspr/smut_llama_8b_smut_2k_romance_1k_peft](https://huggingface.co/jspr/smut_llama_8b_smut_2k_romance_1k_peft)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [athirdpath/BigMistral-11b-GLUE_LORA](https://huggingface.co/athirdpath/BigMistral-11b-GLUE_LORA)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [jspr/llama3-instruct-wordcel-smutrom-8k_peft](https://huggingface.co/jspr/llama3-instruct-wordcel-smutrom-8k_peft)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [jspr/llama3-wordcel-smutrom_peft](https://huggingface.co/jspr/llama3-wordcel-smutrom_peft)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [jrahn/llama-3-8b-claudstruct-v3](https://huggingface.co/jrahn/llama-3-8b-claudstruct-v3)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [jspr/smut_llama_8b_32k_peft_ax](https://huggingface.co/jspr/smut_llama_8b_32k_peft_ax)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [basilePlus/llama3-8b-schopenhauer](https://huggingface.co/basilePlus/llama3-8b-schopenhauer)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: MrRobotoAI/D4+athirdpath/BigMistral-11b-GLUE_LORA
  - model: MrRobotoAI/D4+automorphic/LORA_20231221_042843_philosophy
  - model: MrRobotoAI/D4+basilePlus/llama3-8b-schopenhauer
  - model: MrRobotoAI/D4+Blackroot/Llama-3-LongStory-LORA
  - model: MrRobotoAI/D4+Chat-Error/Claude-Kimiko
  - model: MrRobotoAI/D4+hannahbillo/dpo-llama3-8b-grammar-rules
  - model: MrRobotoAI/D4+ian00000/Llama-3-8B_offensive_CoT_finetuned
  - model: MrRobotoAI/D4+jrahn/llama-3-8b-claudstruct-v3
  - model: MrRobotoAI/D4+jspr/llama3-instruct-wordcel-smutrom_peft
  - model: MrRobotoAI/D4+jspr/llama3-instruct-wordcel-smutrom-8k_peft
  - model: MrRobotoAI/D4+jspr/llama3-wordcel-smutrom_peft
  - model: MrRobotoAI/D4+jspr/llama3-wordcel-smutrom-reorder_peft
  - model: MrRobotoAI/D4+jspr/smut_llama_8b_32k_peft_ax
  - model: MrRobotoAI/D4+jspr/smut_llama_8b_smut_2k_romance_1k_peft
  - model: MrRobotoAI/D4+jspr/smut_llama_8b_smutromance_32k_peft
  - model: MrRobotoAI/D4+sardukar/physiology-8k-llama3-8b-qlora
  - model: MrRobotoAI/D4+sayhan/OpenHermes-2.5-Strix-Philosophy-Mistral-7B-LoRA
  - model: MrRobotoAI/D4+surya-narayanan/human_sexuality
  - model: MrRobotoAI/D4+basilePlus/llama3-8b-schopenhauer
  - model: MrRobotoAI/D4+Triangle104/Vulkane_120-Days-of-Sodom-LoRA-Mistral-7b
merge_method: model_stock
base_model: MrRobotoAI/D4
normalize: true
dtype: float16
```
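The YAML above is a standard mergekit configuration (the `base+adapter` notation folds each LoRA into the base before merging). As a hedged sketch, not taken from this repository, such a config is typically applied either via the `mergekit-yaml` CLI or through mergekit's Python entry points as below; the file name, output path, and options are assumptions, and the exact Python API can vary between mergekit versions.

```python
# Hedged sketch of running a model_stock merge like the config above with mergekit.
# Equivalent CLI (per mergekit's README): mergekit-yaml model_stock_config.yaml ./merged-model
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Assumes the YAML shown in the card has been saved to this (hypothetical) file.
with open("model_stock_config.yaml", "r", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    config,
    out_path="./merged-model",   # arbitrary output directory for the float16 merged weights
    options=MergeOptions(
        cuda=False,              # set True to run the merge on GPU
        copy_tokenizer=True,     # carry the base model's tokenizer into the output
        lazy_unpickle=True,      # lower peak memory while loading shards
    ),
)
```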
olmeange/deepseek-moe-16b-finetuned_cardiac
olmeange
2025-04-28T00:43:09Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-28T00:43:00Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
linf545/LLaMA_lora_lr1e5_epo1_rank8_eLife_0425
linf545
2025-04-28T00:37:44Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-28T00:37:35Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
Somya1834/FactChecker-mistral
Somya1834
2025-04-28T00:36:44Z
0
0
null
[ "safetensors", "text-generation", "conversational", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.1", "license:apache-2.0", "region:us" ]
text-generation
2025-04-27T17:15:00Z
---
license: apache-2.0
base_model:
- mistralai/Mistral-7B-Instruct-v0.1
pipeline_tag: text-generation
---