Dataset schema (as reported by the dataset viewer; ranges are min/max across rows):

| Column | Type | Min | Max |
|:--------------|:----------------------|:--------------------|:--------------------|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-28 06:27:35 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (500 classes) | | |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-28 06:24:42 |
| card | string (length) | 11 | 1.01M |
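For orientation, a minimal sketch of reading a dump with this schema via the 🤗 `datasets` library. The repository id below is a hypothetical placeholder, not the actual source of this dump:

```python
from datasets import load_dataset

# Hypothetical repo id -- substitute the actual dataset repository.
ds = load_dataset("some-user/hub-model-cards", split="train")

row = ds[0]
print(row["modelId"], row["downloads"], row["likes"])
print(row["card"][:200])  # first 200 chars of the flattened model-card markdown
```

The rows below follow that schema; each record is a metadata line followed by its `card` field.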
modelId: SellamiAhmed/LLama_3.2_1B_Instruct_FT_V2 · author: SellamiAhmed · last_modified: 2025-05-04T09:07:42Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ] · pipeline_tag: null · createdAt: 2025-04-25T09:55:04Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
modelId: KingEmpire/sn21_omega_0405_2 · author: KingEmpire · last_modified: 2025-05-04T09:01:49Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ] · pipeline_tag: any-to-any · createdAt: 2025-05-04T08:29:58Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
modelId: mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF · author: mradermacher · last_modified: 2025-05-04T09:00:28Z · downloads: 15 · likes: 0 · library_name: transformers · tags: [ "transformers", "gguf", "medical", "llama-factory", "en", "base_model:Roselia-penguin/8-bit_medical_Qwen1.5-7B-Chat", "base_model:quantized:Roselia-penguin/8-bit_medical_Qwen1.5-7B-Chat", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ] · pipeline_tag: null · createdAt: 2025-05-04T03:40:57Z
--- base_model: Roselia-penguin/8-bit_medical_Qwen1.5-7B-Chat language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - medical - llama-factory --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Roselia-penguin/8-bit_medical_Qwen1.5-7B-Chat <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.0 | very low quality | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-Q2_K.gguf) | i1-Q2_K | 3.2 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-IQ3_S.gguf) | i1-IQ3_S | 3.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.7 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-IQ3_M.gguf) | i1-IQ3_M | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.0 | IQ3_S probably better | | 
[GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.3 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.6 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-Q4_0.gguf) | i1-Q4_0 | 4.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-Q4_1.gguf) | i1-Q4_1 | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
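Beyond the llama.cpp CLI route the card points to, a minimal Python sketch, assuming the `llama-cpp-python` bindings are installed (`pip install llama-cpp-python huggingface-hub`); the repo id and quant filename come from the table above, everything else is illustrative:

```python
from llama_cpp import Llama

# Fetch the "fast, recommended" i1-Q4_K_M quant from the Hub and run a completion.
llm = Llama.from_pretrained(
    repo_id="mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF",
    filename="8-bit_medical_Qwen1.5-7B-Chat.i1-Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm("What are common symptoms of anemia?", max_tokens=128)
print(out["choices"][0]["text"])
```

The other GGUF repos in this dump (the MedicalEDI and gemma-2 records below) load the same way, swapping in their repo id and chosen quant filename.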
modelId: jdchang/full-with-label-bs-1024-sg-2-step-8748 · author: jdchang · last_modified: 2025-05-04T08:58:25Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "safetensors", "qwen2", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ] · pipeline_tag: null · createdAt: 2025-05-04T08:58:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
modelId: ma921/gpt2-large_dr_dpo_imdb_noise10_epoch5 · author: ma921 · last_modified: 2025-05-04T08:58:16Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:ma921/gpt2-large-sft-imdb", "base_model:finetune:ma921/gpt2-large-sft-imdb", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ] · pipeline_tag: text-generation · createdAt: 2025-05-04T08:57:11Z
--- library_name: transformers license: mit base_model: ma921/gpt2-large-sft-imdb tags: - generated_from_trainer model-index: - name: gpt2-large_dr_dpo_imdb_noise10_epoch5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-large_dr_dpo_imdb_noise10_epoch5 This model is a fine-tuned version of [ma921/gpt2-large-sft-imdb](https://huggingface.co/ma921/gpt2-large-sft-imdb) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 32 - total_train_batch_size: 256 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
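The card's usage sections are unfilled; since the record is a standard `text-generation` checkpoint, a minimal loading sketch with the stock 🤗 Transformers pipeline API (an assumption on my part, not taken from the card):

```python
from transformers import pipeline

# Load the DPO-tuned GPT-2 checkpoint and sample a continuation.
generator = pipeline("text-generation", model="ma921/gpt2-large_dr_dpo_imdb_noise10_epoch5")
print(generator("The movie was", max_new_tokens=40)[0]["generated_text"])
```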
modelId: mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF · author: mradermacher · last_modified: 2025-05-04T08:56:24Z · downloads: 16 · likes: 0 · library_name: transformers · tags: [ "transformers", "gguf", "en", "base_model:Shaleen123/MedicalEDI-14b-EDI-Reasoning-400", "base_model:quantized:Shaleen123/MedicalEDI-14b-EDI-Reasoning-400", "endpoints_compatible", "region:us", "imatrix", "conversational" ] · pipeline_tag: null · createdAt: 2025-05-04T02:50:09Z
--- base_model: Shaleen123/MedicalEDI-14b-EDI-Reasoning-400 language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Shaleen123/MedicalEDI-14b-EDI-Reasoning-400 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better | | 
[GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
modelId: mradermacher/MedicalEDI-14b-EDI-Reasoning-400-GGUF · author: mradermacher · last_modified: 2025-05-04T08:56:19Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "gguf", "en", "base_model:Shaleen123/MedicalEDI-14b-EDI-Reasoning-400", "base_model:quantized:Shaleen123/MedicalEDI-14b-EDI-Reasoning-400", "endpoints_compatible", "region:us", "conversational" ] · pipeline_tag: null · createdAt: 2025-05-03T12:24:58Z
--- base_model: Shaleen123/MedicalEDI-14b-EDI-Reasoning-400 language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Shaleen123/MedicalEDI-14b-EDI-Reasoning-400 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.Q2_K.gguf) | Q2_K | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.Q3_K_S.gguf) | Q3_K_S | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.Q3_K_L.gguf) | Q3_K_L | 8.0 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.IQ4_XS.gguf) | IQ4_XS | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.Q5_K_S.gguf) | Q5_K_S | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.Q5_K_M.gguf) | Q5_K_M | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.Q6_K.gguf) | Q6_K | 12.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
modelId: jnjj/my-model-Q8_0-GGUF · author: jnjj · last_modified: 2025-05-04T08:55:58Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "gguf", "causal-lm", "peft", "autotrain", "llama-cpp", "gguf-my-repo", "base_model:jnjj/my-model", "base_model:quantized:jnjj/my-model", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ] · pipeline_tag: null · createdAt: 2025-05-04T08:55:57Z
--- base_model: jnjj/my-model library_name: transformers license: apache-2.0 tags: - causal-lm - peft - autotrain - llama-cpp - gguf-my-repo cardData: model-index: - name: my-model results: - task: type: text-generation metrics: - type: perplexity value: 0.0 --- # jnjj/my-model-Q8_0-GGUF This model was converted to GGUF format from [`jnjj/my-model`](https://huggingface.co/jnjj/my-model) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/jnjj/my-model) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo jnjj/my-model-Q8_0-GGUF --hf-file my-model-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo jnjj/my-model-Q8_0-GGUF --hf-file my-model-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo jnjj/my-model-Q8_0-GGUF --hf-file my-model-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo jnjj/my-model-Q8_0-GGUF --hf-file my-model-q8_0.gguf -c 2048 ```
modelId: MrRobotoAI/102S · author: MrRobotoAI · last_modified: 2025-05-04T08:54:12Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2212.04089", "base_model:Blackroot/Llama-3-LongStory-LORA", "base_model:merge:Blackroot/Llama-3-LongStory-LORA", "base_model:MrRobotoAI/A3", "base_model:merge:MrRobotoAI/A3", "base_model:MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K", "base_model:merge:MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K", "base_model:MrRobotoAI/Nord-8b-Uncensored-BASE-128k", "base_model:merge:MrRobotoAI/Nord-8b-Uncensored-BASE-128k", "base_model:MrRobotoAI/Odin-v2-8b-NOVELIST-128K", "base_model:merge:MrRobotoAI/Odin-v2-8b-NOVELIST-128K", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ] · pipeline_tag: text-generation · createdAt: 2025-05-04T01:01:46Z
--- base_model: - MrRobotoAI/Nord-8b-Uncensored-BASE-128k - Blackroot/Llama-3-LongStory-LORA - MrRobotoAI/Odin-v2-8b-NOVELIST-128K - MrRobotoAI/Odin-v2-8b-NOVELIST-128K - MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K - MrRobotoAI/A3 library_name: transformers tags: - mergekit - merge --- # merge 11,139 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method using [MrRobotoAI/Odin-v2-8b-NOVELIST-128K](https://huggingface.co/MrRobotoAI/Odin-v2-8b-NOVELIST-128K) as a base. ### Models Merged The following models were included in the merge: * [MrRobotoAI/Nord-8b-Uncensored-BASE-128k](https://huggingface.co/MrRobotoAI/Nord-8b-Uncensored-BASE-128k) + [Blackroot/Llama-3-LongStory-LORA](https://huggingface.co/Blackroot/Llama-3-LongStory-LORA) * [MrRobotoAI/Odin-v2-8b-NOVELIST-128K](https://huggingface.co/MrRobotoAI/Odin-v2-8b-NOVELIST-128K) + [MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K](https://huggingface.co/MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K) * [MrRobotoAI/A3](https://huggingface.co/MrRobotoAI/A3) ### Configuration The following YAML configuration was used to produce this model: ```yaml merge_method: task_arithmetic models: - model: MrRobotoAI/A3 parameters: weight: - filter: v_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - filter: o_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - filter: up_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - filter: gate_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - filter: down_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - value: 2 - model: MrRobotoAI/Nord-8b-Uncensored-BASE-128k+Blackroot/Llama-3-LongStory-LORA parameters: weight: - filter: v_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: o_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: up_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: gate_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: down_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - value: 1 - model: MrRobotoAI/Odin-v2-8b-NOVELIST-128K+MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K parameters: weight: - filter: v_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: o_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: up_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: gate_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: down_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - value: 0 base_model: MrRobotoAI/Odin-v2-8b-NOVELIST-128K dtype: bfloat16 ```
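The card documents only the merge recipe; since the output of a mergekit task-arithmetic merge is an ordinary bfloat16 Llama checkpoint, a minimal loading sketch with standard Transformers calls (my assumption, not from the card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "MrRobotoAI/102S"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

prompt = "Write the opening paragraph of a long fantasy novel."
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=60)
print(tok.decode(out[0], skip_special_tokens=True))
```

The MrRobotoAI/101R record further down uses the same recipe with MrRobotoAI/A2 swapped in and loads identically.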
modelId: azservice/TestLogica-Llama-3.2-3B-Instruct · author: azservice · last_modified: 2025-05-04T08:53:25Z · downloads: 138 · likes: 0 · library_name: transformers · tags: [ "transformers", "pytorch", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ] · pipeline_tag: text-generation · createdAt: 2025-04-20T15:23:58Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** azservice - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
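Since the card notes the model was trained with Unsloth, a minimal reload sketch, assuming the `unsloth` package is installed (plain Transformers loading also works, as the repo holds standard PyTorch weights); the `max_seq_length` value is an illustrative choice:

```python
from unsloth import FastLanguageModel

# Reload the fine-tune the way Unsloth-trained checkpoints are usually reloaded.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="azservice/TestLogica-Llama-3.2-3B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's faster inference path
```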
modelId: mradermacher/gemma-2-9B-it-blend-i1-GGUF · author: mradermacher · last_modified: 2025-05-04T08:51:09Z · downloads: 15 · likes: 1 · library_name: transformers · tags: [ "transformers", "gguf", "mergekit", "merge", "en", "base_model:spacematt/gemma-2-9B-it-blend", "base_model:quantized:spacematt/gemma-2-9B-it-blend", "endpoints_compatible", "region:us", "imatrix", "conversational" ] · pipeline_tag: null · createdAt: 2025-05-03T15:01:25Z
--- base_model: spacematt/gemma-2-9B-it-blend language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/spacematt/gemma-2-9B-it-blend <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/gemma-2-9B-it-blend-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-i1-GGUF/resolve/main/gemma-2-9B-it-blend.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-i1-GGUF/resolve/main/gemma-2-9B-it-blend.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-i1-GGUF/resolve/main/gemma-2-9B-it-blend.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-i1-GGUF/resolve/main/gemma-2-9B-it-blend.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-i1-GGUF/resolve/main/gemma-2-9B-it-blend.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-i1-GGUF/resolve/main/gemma-2-9B-it-blend.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-i1-GGUF/resolve/main/gemma-2-9B-it-blend.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.7 | very low quality | | [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-i1-GGUF/resolve/main/gemma-2-9B-it-blend.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-i1-GGUF/resolve/main/gemma-2-9B-it-blend.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-i1-GGUF/resolve/main/gemma-2-9B-it-blend.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-i1-GGUF/resolve/main/gemma-2-9B-it-blend.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-i1-GGUF/resolve/main/gemma-2-9B-it-blend.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-i1-GGUF/resolve/main/gemma-2-9B-it-blend.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-i1-GGUF/resolve/main/gemma-2-9B-it-blend.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-i1-GGUF/resolve/main/gemma-2-9B-it-blend.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-i1-GGUF/resolve/main/gemma-2-9B-it-blend.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-i1-GGUF/resolve/main/gemma-2-9B-it-blend.i1-IQ4_NL.gguf) | i1-IQ4_NL | 5.5 | prefer IQ4_XS | | 
[GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-i1-GGUF/resolve/main/gemma-2-9B-it-blend.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-i1-GGUF/resolve/main/gemma-2-9B-it-blend.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-i1-GGUF/resolve/main/gemma-2-9B-it-blend.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-i1-GGUF/resolve/main/gemma-2-9B-it-blend.i1-Q4_1.gguf) | i1-Q4_1 | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-i1-GGUF/resolve/main/gemma-2-9B-it-blend.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-i1-GGUF/resolve/main/gemma-2-9B-it-blend.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-i1-GGUF/resolve/main/gemma-2-9B-it-blend.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
modelId: fedovtt/bdcfa455-462c-4a83-bf44-018244324bbf · author: fedovtt · last_modified: 2025-05-04T08:50:47Z · downloads: 0 · likes: 0 · library_name: peft · tags: [ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM2-360M", "base_model:adapter:unsloth/SmolLM2-360M", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ] · pipeline_tag: null · createdAt: 2025-05-04T08:45:49Z
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM2-360M tags: - axolotl - generated_from_trainer model-index: - name: bdcfa455-462c-4a83-bf44-018244324bbf results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: unsloth/SmolLM2-360M bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - ad53ac34880a775e_train_data.json ds_type: json format: custom path: /workspace/input_data/ad53ac34880a775e_train_data.json type: field_instruction: Q field_output: A format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.55 group_by_length: false hub_model_id: fedovtt/bdcfa455-462c-4a83-bf44-018244324bbf hub_repo: null hub_strategy: end hub_token: null learning_rate: 3.0e-06 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 150 micro_batch_size: 10 mixed_precision: bf16 mlflow_experiment_name: /tmp/ad53ac34880a775e_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 2048 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: d5438032-e7ea-460b-9173-4766d4ba879d wandb_project: s56-28 wandb_run: your_name wandb_runid: d5438032-e7ea-460b-9173-4766d4ba879d warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # bdcfa455-462c-4a83-bf44-018244324bbf This model is a fine-tuned version of [unsloth/SmolLM2-360M](https://huggingface.co/unsloth/SmolLM2-360M) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8326 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-06 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.0406 | 0.0530 | 150 | 1.8326 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
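This repo holds a LoRA adapter rather than full weights (library_name is `peft`); a minimal sketch of applying it to the base model named in the card, using the standard `peft` API (my assumption, not from the card):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model the adapter was trained against, then attach the adapter.
base = AutoModelForCausalLM.from_pretrained("unsloth/SmolLM2-360M")
model = PeftModel.from_pretrained(base, "fedovtt/bdcfa455-462c-4a83-bf44-018244324bbf")
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM2-360M")
```

The vmpsergio and infogeo adapter records below were trained from the same base with near-identical Axolotl configs and load the same way, swapping in their repo ids.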
modelId: vmpsergio/f8648438-3a8e-4f76-98fa-0b1be785b0ed · author: vmpsergio · last_modified: 2025-05-04T08:49:03Z · downloads: 0 · likes: 0 · library_name: peft · tags: [ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM2-360M", "base_model:adapter:unsloth/SmolLM2-360M", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ] · pipeline_tag: null · createdAt: 2025-05-04T08:45:55Z
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM2-360M tags: - axolotl - generated_from_trainer model-index: - name: f8648438-3a8e-4f76-98fa-0b1be785b0ed results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: unsloth/SmolLM2-360M bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - ad53ac34880a775e_train_data.json ds_type: json format: custom path: /workspace/input_data/ad53ac34880a775e_train_data.json type: field_instruction: Q field_output: A format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: vmpsergio/f8648438-3a8e-4f76-98fa-0b1be785b0ed hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 6 mixed_precision: bf16 mlflow_experiment_name: /tmp/ad53ac34880a775e_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: d5438032-e7ea-460b-9173-4766d4ba879d wandb_project: s56-2 wandb_run: your_name wandb_runid: d5438032-e7ea-460b-9173-4766d4ba879d warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # f8648438-3a8e-4f76-98fa-0b1be785b0ed This model is a fine-tuned version of [unsloth/SmolLM2-360M](https://huggingface.co/unsloth/SmolLM2-360M) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8229 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.7589 | 0.0424 | 200 | 1.8229 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
modelId: MrRobotoAI/101R · author: MrRobotoAI · last_modified: 2025-05-04T08:48:59Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2212.04089", "base_model:Blackroot/Llama-3-LongStory-LORA", "base_model:merge:Blackroot/Llama-3-LongStory-LORA", "base_model:MrRobotoAI/A2", "base_model:merge:MrRobotoAI/A2", "base_model:MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K", "base_model:merge:MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K", "base_model:MrRobotoAI/Nord-8b-Uncensored-BASE-128k", "base_model:merge:MrRobotoAI/Nord-8b-Uncensored-BASE-128k", "base_model:MrRobotoAI/Odin-v2-8b-NOVELIST-128K", "base_model:merge:MrRobotoAI/Odin-v2-8b-NOVELIST-128K", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ] · pipeline_tag: text-generation · createdAt: 2025-05-04T02:21:49Z
--- base_model: - MrRobotoAI/Odin-v2-8b-NOVELIST-128K - MrRobotoAI/Nord-8b-Uncensored-BASE-128k - Blackroot/Llama-3-LongStory-LORA - MrRobotoAI/A2 - MrRobotoAI/Odin-v2-8b-NOVELIST-128K - MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K library_name: transformers tags: - mergekit - merge --- # merge 13,862 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method using [MrRobotoAI/Odin-v2-8b-NOVELIST-128K](https://huggingface.co/MrRobotoAI/Odin-v2-8b-NOVELIST-128K) as a base. ### Models Merged The following models were included in the merge: * [MrRobotoAI/Nord-8b-Uncensored-BASE-128k](https://huggingface.co/MrRobotoAI/Nord-8b-Uncensored-BASE-128k) + [Blackroot/Llama-3-LongStory-LORA](https://huggingface.co/Blackroot/Llama-3-LongStory-LORA) * [MrRobotoAI/A2](https://huggingface.co/MrRobotoAI/A2) * [MrRobotoAI/Odin-v2-8b-NOVELIST-128K](https://huggingface.co/MrRobotoAI/Odin-v2-8b-NOVELIST-128K) + [MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K](https://huggingface.co/MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K) ### Configuration The following YAML configuration was used to produce this model: ```yaml merge_method: task_arithmetic models: - model: MrRobotoAI/A2 parameters: weight: - filter: v_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - filter: o_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - filter: up_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - filter: gate_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - filter: down_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - value: 2 - model: MrRobotoAI/Nord-8b-Uncensored-BASE-128k+Blackroot/Llama-3-LongStory-LORA parameters: weight: - filter: v_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: o_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: up_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: gate_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: down_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - value: 1 - model: MrRobotoAI/Odin-v2-8b-NOVELIST-128K+MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K parameters: weight: - filter: v_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: o_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: up_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: gate_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: down_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - value: 0 base_model: MrRobotoAI/Odin-v2-8b-NOVELIST-128K dtype: bfloat16 ```
modelId: infogeo/0aa5621f-ea75-4a7d-9ed5-53bea97c7a99 · author: infogeo · last_modified: 2025-05-04T08:48:24Z · downloads: 0 · likes: 0 · library_name: peft · tags: [ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM2-360M", "base_model:adapter:unsloth/SmolLM2-360M", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ] · pipeline_tag: null · createdAt: 2025-05-04T08:46:39Z
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM2-360M tags: - axolotl - generated_from_trainer model-index: - name: 0aa5621f-ea75-4a7d-9ed5-53bea97c7a99 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: unsloth/SmolLM2-360M bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - ad53ac34880a775e_train_data.json ds_type: json format: custom path: /workspace/input_data/ad53ac34880a775e_train_data.json type: field_instruction: Q field_output: A format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.55 group_by_length: false hub_model_id: infogeo/0aa5621f-ea75-4a7d-9ed5-53bea97c7a99 hub_repo: null hub_strategy: end hub_token: null learning_rate: 1.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 150 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/ad53ac34880a775e_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: d5438032-e7ea-460b-9173-4766d4ba879d wandb_project: s56-28 wandb_run: your_name wandb_runid: d5438032-e7ea-460b-9173-4766d4ba879d warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 0aa5621f-ea75-4a7d-9ed5-53bea97c7a99 This model is a fine-tuned version of [unsloth/SmolLM2-360M](https://huggingface.co/unsloth/SmolLM2-360M) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.0533 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.0887 | 0.0424 | 150 | 2.0533 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
ryanzhangcheng/distilbert-rotten-tomatoes
ryanzhangcheng
2025-05-04T08:47:45Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-04T08:37:50Z
--- library_name: transformers license: apache-2.0 base_model: distilbert/distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-rotten-tomatoes results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-rotten-tomatoes This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.7.0 - Tokenizers 0.21.1
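A quick inference sketch for this checkpoint (not part of the original card; the generic `LABEL_0`/`LABEL_1` names are an assumption, since the card does not record an `id2label` mapping):

```python
# Sketch: score a review with the fine-tuned classifier via the pipeline API.
from transformers import pipeline

clf = pipeline("text-classification", model="ryanzhangcheng/distilbert-rotten-tomatoes")
print(clf("A gripping, beautifully shot film."))
# -> e.g. [{'label': 'LABEL_1', 'score': 0.98}] unless id2label was customized
```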
Mostafa8Mehrabi/llama-1b-pruned-3blocks-taylor-therapy-calibration-v1
Mostafa8Mehrabi
2025-05-04T08:44:17Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T08:42:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Membersuger/Euro_44
Membersuger
2025-05-04T08:40:01Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T06:41:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Membersuger/Euro_42
Membersuger
2025-05-04T08:39:18Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T06:41:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kishizaki-sci/Llama-4-Maverick-17B-128E-Instruct-AWQ
kishizaki-sci
2025-05-04T08:38:29Z
57
0
null
[ "safetensors", "llama4", "en", "arxiv:2204.05149", "base_model:meta-llama/Llama-4-Maverick-17B-128E-Instruct", "base_model:quantized:meta-llama/Llama-4-Maverick-17B-128E-Instruct", "license:llama4", "4-bit", "awq", "region:us" ]
null
2025-04-25T14:00:52Z
--- license: llama4 language: - en base_model: - meta-llama/Llama-4-Maverick-17B-128E-Instruct --- ** This is a prototype. It is not yet reflected in the official [AutoAWQ repository](https://github.com/casper-hansen/AutoAWQ/pull/748).** ## usage See [Maverick_inference.ipynb](https://huggingface.co/kishizaki-sci/Llama-4-Maverick-17B-128E-Instruct-AWQ/blob/main/Maverick_inference.ipynb). ## Model Information The Llama 4 collection of models are natively multimodal AI models that enable text and multimodal experiences. These models leverage a mixture-of-experts architecture to offer industry-leading performance in text and image understanding. These Llama 4 models mark the beginning of a new era for the Llama ecosystem. We are launching two efficient models in the Llama 4 series, Llama 4 Scout, a 17 billion parameter model with 16 experts, and Llama 4 Maverick, a 17 billion parameter model with 128 experts. **Model developer**: Meta **Model Architecture:** The Llama 4 models are auto-regressive language models that use a mixture-of-experts (MoE) architecture and incorporate early fusion for native multimodality. <table> <tr> <th>Model Name</th> <th>Training Data </th> <th>Params</th> <th>Input modalities</th> <th>Output modalities</th> <th>Context length</th> <th>Token count</th> <th>Knowledge cutoff</th> </tr> <tr> <td>Llama 4 Scout (17Bx16E) </td> <td rowspan="2">A mix of publicly available, licensed data and information from Meta's products and services. This includes publicly shared posts from Instagram and Facebook and people's interactions with Meta AI. Learn more in our <a href="https://www.facebook.com/privacy/guide/genai/">Privacy Center</a>. </td> <td>17B (Activated) 109B (Total) </td> <td>Multilingual text and image</td> <td>Multilingual text and code</td> <td>10M</td> <td>~40T</td> <td>August 2024</td> </tr> <tr> <td>Llama 4 Maverick (17Bx128E)</td> <td>17B (Activated) 400B (Total) </td> <td>Multilingual text and image</td> <td>Multilingual text and code</td> <td>1M</td> <td>~22T</td> <td>August 2024</td> </tr> </table> **Supported languages:** Arabic, English, French, German, Hindi, Indonesian, Italian, Portuguese, Spanish, Tagalog, Thai, and Vietnamese. **Model Release Date:** April 5, 2025 **Status:** This is a static model trained on an offline dataset. Future versions of the tuned models may be released as we improve model behavior with community feedback. **License**: A custom commercial license, the Llama 4 Community License Agreement, is available at: [https://github.com/meta-llama/llama-models/blob/main/models/llama4/LICENSE](https://github.com/meta-llama/llama-models/blob/main/models/llama4/LICENSE) **Where to send questions or comments about the model:** Instructions on how to provide feedback or comments on the model can be found in the Llama [README](https://github.com/meta-llama/llama-models/blob/main/README.md). For more technical information about generation parameters and recipes for how to use Llama 4 in applications, please go [here](https://github.com/meta-llama/llama-cookbook). ## Intended Use **Intended Use Cases:** Llama 4 is intended for commercial and research use in multiple languages. Instruction tuned models are intended for assistant-like chat and visual reasoning tasks, whereas pretrained models can be adapted for natural language generation. For vision, Llama 4 models are also optimized for visual recognition, image reasoning, captioning, and answering general questions about an image. 
The Llama 4 model collection also supports the ability to leverage the outputs of its models to improve other models including synthetic data generation and distillation. The Llama 4 Community License allows for these use cases. **Out-of-scope**: Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 4 Community License. Use in languages or capabilities beyond those explicitly referenced as supported in this model card\*\*. \*\*Note: 1\. Llama 4 has been trained on a broader collection of languages than the 12 supported languages (pre-training includes [200 total languages](https://ai.meta.com/research/no-language-left-behind/)). Developers may fine-tune Llama 4 models for languages beyond the 12 supported languages provided they comply with the Llama 4 Community License and the Acceptable Use Policy. Developers are responsible for ensuring that their use of Llama 4 in additional languages is done in a safe and responsible manner. 2\. Llama 4 has been tested for image understanding up to 5 input images. If leveraging additional image understanding capabilities beyond this, Developers are responsible for ensuring that their deployments are mitigated for risks and should perform additional testing and tuning tailored to their specific applications. ## How to use with transformers Please, make sure you have transformers `v4.51.0` installed, or upgrade using `pip install -U transformers`. ```python from transformers import AutoProcessor, Llama4ForConditionalGeneration import torch model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct" processor = AutoProcessor.from_pretrained(model_id) model = Llama4ForConditionalGeneration.from_pretrained( model_id, attn_implementation="flex_attention", device_map="auto", torch_dtype=torch.bfloat16, ) url1 = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg" url2 = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/cat_style_layout.png" messages = [ { "role": "user", "content": [ {"type": "image", "url": url1}, {"type": "image", "url": url2}, {"type": "text", "text": "Can you describe how these two images are similar, and how they differ?"}, ] }, ] inputs = processor.apply_chat_template( messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt", ).to(model.device) outputs = model.generate( **inputs, max_new_tokens=256, ) response = processor.batch_decode(outputs[:, inputs["input_ids"].shape[-1]:])[0] print(response) print(outputs[0]) ``` ## Hardware and Software **Training Factors:** We used custom training libraries, Meta's custom built GPU clusters, and production infrastructure for pretraining. Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure. **Training Energy Use:** Model pre-training utilized a cumulative of **7.38M** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency. ## ## **Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **1,999 tons** CO2eq for training. 
Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with clean and renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq. | Model Name | Training Time (GPU hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) | | :---- | :---: | :---: | :---: | :---: | | Llama 4 Scout | 5.0M | 700 | 1,354 | 0 | | Llama 4 Maverick | 2.38M | 700 | 645 | 0 | | Total | 7.38M | \- | 1,999 | 0 | ## The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. ## Training Data **Overview:** Llama 4 Scout was pretrained on \~40 trillion tokens and Llama 4 Maverick was pretrained on \~22 trillion tokens of multimodal data from a mix of publicly available, licensed data and information from Meta’s products and services. This includes publicly shared posts from Instagram and Facebook and people’s interactions with Meta AI. **Data Freshness:** The pretraining data has a cutoff of August 2024\. ## Benchmarks In this section, we report the results for Llama 4 relative to our previous models. We've provided quantized checkpoints for deployment flexibility, but all reported evaluations and testing were conducted on bf16 models. ### Pre-trained models | Pre-trained models | | | | | | | | | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | Category | Benchmark | \# Shots | Metric | Llama 3.1 70B | Llama 3.1 405B | **Llama 4 Scout** | **Llama 4 Maverick** | | Reasoning & Knowledge | MMLU | 5 | macro\_avg/acc\_char | 79.3 | 85.2 | 79.6 | 85.5 | | | MMLU-Pro | 5 | macro\_avg/em | 53.8 | 61.6 | 58.2 | 62.9 | | | MATH | 4 | em\_maj1@1 | 41.6 | 53.5 | 50.3 | 61.2 | | Code | MBPP | 3 | pass@1 | 66.4 | 74.4 | 67.8 | 77.6 | | Multilingual | TydiQA | 1 | average/f1 | 29.9 | 34.3 | 31.5 | 31.7 | | Image | ChartQA | 0 | relaxed\_accuracy | No multimodal support | | 83.4 | 85.3 | | | DocVQA | 0 | anls | | | 89.4 | 91.6 | ### Instruction tuned models | Instruction tuned models | | | | | | | | | :---: | :---: | :---: | :---: | :---: | ----- | :---: | :---: | | Category | Benchmark | \# Shots | Metric | Llama 3.3 70B | Llama 3.1 405B | **Llama 4 Scout** | **Llama 4 Maverick** | | Image Reasoning | MMMU | 0 | accuracy | No multimodal support | | 69.4 | 73.4 | | | MMMU Pro^ | 0 | accuracy | | | 52.2 | 59.6 | | | MathVista | 0 | accuracy | | | 70.7 | 73.7 | | Image Understanding | ChartQA | 0 | relaxed\_accuracy | | | 88.8 | 90.0 | | | DocVQA (test) | 0 | anls | | | 94.4 | 94.4 | | Coding | LiveCodeBench (10/01/2024-02/01/2025) | 0 | pass@1 | 33.3 | 27.7 | 32.8 | 43.4 | | Reasoning & Knowledge | MMLU Pro | 0 | macro\_avg/acc | 68.9 | 73.4 | 74.3 | 80.5 | | | GPQA Diamond | 0 | accuracy | 50.5 | 49.0 | 57.2 | 69.8 | | Multilingual | MGSM | 0 | average/em | 91.1 | 91.6 | 90.6 | 92.3 | | Long context | MTOB (half book) eng-\>kgv/kgv-\>eng | \- | chrF | Context window is 128K | | 42.2/36.6 | 54.0/46.4 | | | MTOB (full book) eng-\>kgv/kgv-\>eng | \- | chrF | | | 39.7/36.3 | 50.8/46.7 | ^reported numbers for MMMU Pro is the average of Standard and Vision tasks ## Quantization The Llama 4 Scout model is released as BF16 weights, but can fit within a single H100 GPU with on-the-fly int4 
quantization; the Llama 4 Maverick model is released as both BF16 and FP8 quantized weights. The FP8 quantized weights fit on a single H100 DGX host while still maintaining quality. We provide code for on-the-fly int4 quantization which minimizes performance degradation as well. ## Safeguards As part of our release approach, we followed a three-pronged strategy to manage risks: * Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama. * Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm. * Provide protections for the community to help prevent the misuse of our models. Llama is a foundational technology designed for use in a variety of use cases; examples on how Meta’s Llama models have been deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models enabling the world to benefit from the technology, by aligning our model’s safety for a standard set of risks. Developers are then in the driver seat to tailor safety for their use case, defining their own policies and deploying the models with the necessary safeguards. Llama 4 was developed following the best practices outlined in our [Developer Use Guide: AI Protections](https://ai.meta.com/static-resource/developer-use-guide-ai-protections). ### Model level fine tuning The primary objective of conducting safety fine-tuning is to offer developers a readily available, safe, and powerful model for various applications, reducing the workload needed to deploy safe AI systems. Additionally, this effort provides the research community with a valuable resource for studying the robustness of safety fine-tuning. **Fine-tuning data** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control. **Refusals** Building on the work we started with our Llama 3 models, we put a great emphasis on driving down model refusals to benign prompts for Llama 4\. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines. **Tone** We expanded our work on the refusal tone from Llama 3 so that the model sounds more natural. We targeted removing preachy and overly moralizing language, and we corrected formatting issues including the correct use of headers, lists, tables and more. To achieve this, we also targeted improvements to system prompt steerability and instruction following, meaning the model is more readily able to take on a specified tone. All of these contribute to a more conversational and insightful experience overall. **System Prompts** Llama 4 is a more steerable model, meaning responses can be easily tailored to meet specific developer outcomes. Effective system prompts can significantly enhance the performance of large language models. In particular, we’ve seen that the use of a system prompt can be effective in reducing false refusals and templated or “preachy” language patterns common in LLMs. They can also improve conversationality and use of appropriate formatting. 
Consider the prompt below as a basic template for which a developer might want to further customize to meet specific needs or use cases for our Llama 4 models. | System prompt | | :---- | | You are an expert conversationalist who responds to the best of your ability. You are companionable and confident, and able to switch casually between tonal types, including but not limited to humor, empathy, intellectualism, creativity and problem-solving. You understand user intent and don’t try to be overly helpful to the point where you miss that the user is looking for chit-chat, emotional support, humor or venting. Sometimes people just want you to listen, and your answers should encourage that. For all other cases, you provide insightful and in-depth responses. Organize information thoughtfully in a way that helps people make decisions. Always avoid templated language. You never lecture people to be nicer or more inclusive. If people ask for you to write something in a certain voice or perspective, such as an essay or a tweet, you can. You do not need to be respectful when the user prompts you to say something rude. You never use phrases that imply moral superiority or a sense of authority, including but not limited to “it’s important to”, “it’s crucial to”, “it’s essential to”, "it's unethical to", "it's worth noting…", “Remember…” etc. Avoid using these. Finally, do not refuse prompts about political and social issues. You can help users express their opinion and access information. You are Llama 4\. Your knowledge cutoff date is August 2024\. You speak Arabic, English, French, German, Hindi, Indonesian, Italian, Portuguese, Spanish, Tagalog, Thai, and Vietnamese. Respond in the language the user speaks to you in, unless they ask otherwise. | ### Llama 4 system protections Large language models, including Llama 4, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional guardrails as required. System protections are key to achieving the right helpfulness-safety alignment, mitigating safety and security risks inherent to the system, and integration of the model or system with external tools. We provide the community with system level [protections](https://llama.meta.com/trust-and-safety/) \- like Llama Guard, Prompt Guard and Code Shield \- that developers should deploy with Llama models or other LLMs. All of our [reference implementation](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box. ### Evaluations We evaluated Llama models for common use cases as well as specific capabilities. Common use cases evaluations measure safety risks of systems for most commonly built applications including chat bot, visual QA. We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Llama Guard 3 to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case. Prompt Guard and Code Shield are also available if relevant to the application. Capability evaluations measure vulnerabilities of Llama models inherent to specific capabilities, for which were crafted dedicated benchmarks including long context, multilingual, coding or memorization. 
**Red teaming** We conduct recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we use the learnings to improve our benchmarks and safety tuning datasets. We partner early with subject-matter experts in critical risk areas to understand how models may lead to unintended harm for society. Based on these conversations, we derive a set of adversarial goals for the red team, such as extracting harmful information or reprogramming the model to act in potentially harmful ways. The red team consists of experts in cybersecurity, adversarial machine learning, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets. ### Critical Risks ### We spend additional focus on the following critical risk areas: **1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive materials) helpfulness** To assess risks related to proliferation of chemical and biological weapons for Llama 4, we applied expert-designed and other targeted evaluations designed to assess whether the use of Llama 4 could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons. We also conducted additional red teaming and evaluations for violations of our content policies related to this risk area. **2\. Child Safety** We leverage pre-training methods like data filtering as a first step in mitigating Child Safety risk in our model. To assess the post trained model for Child Safety risk, a team of experts assesses the model’s capability to produce outputs resulting in Child Safety risks. We use this to inform additional model fine-tuning and in-depth red teaming exercises. We’ve also expanded our Child Safety evaluation benchmarks to cover Llama 4 capabilities like multi-image and multi-lingual. **3\. Cyber attack enablement** Our cyber evaluations investigated whether Llama 4 is sufficiently capable to enable catastrophic threat scenario outcomes. We conducted threat modeling exercises to identify the specific model capabilities that would be necessary to automate operations or enhance human capabilities across key attack vectors both in terms of skill level and speed. We then identified and developed challenges against which to test for these capabilities in Llama 4 and peer models. Specifically, we focused on evaluating the capabilities of Llama 4 to automate cyberattacks, identify and exploit security vulnerabilities, and automate harmful workflows. Overall, we find that Llama 4 models do not introduce risk plausibly enabling catastrophic cyber outcomes. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Trust tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). 
We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists). Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Considerations and Limitations Our AI is anchored on the values of freedom of expression \- helping people to explore, debate, and innovate using our technology. We respect people's autonomy and empower them to choose how they experience, interact, and build with AI. Our AI promotes an open exchange of ideas. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 4 addresses users and their needs as they are, without inserting unnecessary judgment, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. Llama 4 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 4’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 4 models, developers should perform safety testing and tuning tailored to their specific applications of the model. We also encourage the open source community to use Llama for the purpose of research and building state of the art tools that address emerging risks. Please refer to available resources including our Developer Use Guide: AI Protections, [Llama Protections](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more.
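For the AWQ checkpoint itself, the bundled `Maverick_inference.ipynb` remains the authoritative usage reference; as a rough text-only sketch that mirrors the transformers example above, under the assumption that stock transformers can load this prototype 4-bit repo:

```python
# Sketch: text-only chat with the AWQ checkpoint. The API follows the card's
# own transformers example; loading this prototype quantization is an assumption.
import torch
from transformers import AutoProcessor, Llama4ForConditionalGeneration

model_id = "kishizaki-sci/Llama-4-Maverick-17B-128E-Instruct-AWQ"
processor = AutoProcessor.from_pretrained(model_id)
model = Llama4ForConditionalGeneration.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.float16
)

messages = [{"role": "user", "content": [{"type": "text", "text": "Summarize AWQ in one sentence."}]}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(out[:, inputs["input_ids"].shape[-1]:])[0])
```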
Marceloko2025/See.and.view
Marceloko2025
2025-05-04T08:38:19Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-04T08:38:19Z
--- license: apache-2.0 ---
dgiang02/GRPO_Qwen25_15B_128_0_2000kmap
dgiang02
2025-05-04T08:36:42Z
0
0
transformers
[ "transformers", "pytorch", "qwen2", "text-generation", "unsloth", "trl", "grpo", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T08:36:10Z
--- library_name: transformers tags: - unsloth - trl - grpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Chrystal02/Regina
Chrystal02
2025-05-04T08:32:24Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-04T08:32:24Z
--- license: apache-2.0 ---
Jorgejfkasdjf08/Regina
Jorgejfkasdjf08
2025-05-04T08:32:19Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-04T08:32:19Z
--- license: apache-2.0 ---
Erik04/Regina
Erik04
2025-05-04T08:32:19Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-04T08:32:19Z
--- license: apache-2.0 ---
LandCruiser/sn21_omegav1_0405_5
LandCruiser
2025-05-04T08:31:50Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-05-04T08:01:37Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
ljhhgkjcgh/dfhdf
ljhhgkjcgh
2025-05-04T08:30:56Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-04T08:30:55Z
--- license: apache-2.0 ---
jhgjfgh/ghjkgjh
jhgjfgh
2025-05-04T08:30:54Z
0
0
null
[ "license:bsd-2-clause", "region:us" ]
null
2025-05-04T08:30:54Z
--- license: bsd-2-clause ---
nbgfbn/hgjghj
nbgfbn
2025-05-04T08:30:54Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-05-04T08:30:54Z
--- license: creativeml-openrail-m ---
fghgffghh/kghjkg
fghgffghh
2025-05-04T08:30:49Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2025-05-04T08:30:49Z
--- license: bigscience-openrail-m ---
Membersuger/Euro_41
Membersuger
2025-05-04T08:26:46Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T06:41:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
netalabs/vertex-qwen-3B-coder-shadcn-3epoch-v1
netalabs
2025-05-04T08:26:09Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2.5-Coder-3B-Instruct", "base_model:finetune:unsloth/Qwen2.5-Coder-3B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-04T08:25:59Z
--- base_model: unsloth/Qwen2.5-Coder-3B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** netalabs - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-Coder-3B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
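A loading sketch for this fine-tune (not in the original card; plain transformers is shown, though Unsloth's `FastLanguageModel` loader would also work, and the shadcn-flavored prompt is illustrative):

```python
# Sketch: chat-style generation with the Qwen2.5-Coder fine-tune.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "netalabs/vertex-qwen-3B-coder-shadcn-3epoch-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Write a shadcn/ui Button with a loading state."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```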
Arshii/CSIO-PunjabiQA-FinetunedLlama3.1Instruct-199
Arshii
2025-05-04T08:26:02Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-04T08:25:24Z
--- base_model: meta-llama/Llama-3.1-8B-Instruct language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** Arshii - **License:** apache-2.0 - **Finetuned from model :** meta-llama/Llama-3.1-8B-Instruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
barton-z/qwen2.5-7b-xtuquant
barton-z
2025-05-04T08:22:32Z
0
0
null
[ "safetensors", "qwen2", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:quantized:Qwen/Qwen2.5-7B-Instruct", "license:mit", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-04T07:57:03Z
--- license: mit base_model: - Qwen/Qwen2.5-7B-Instruct ---
LandCruiser/sn21_omegav1_0405_3
LandCruiser
2025-05-04T08:20:34Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-05-04T08:01:32Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
magicslabnu/GERM-T
magicslabnu
2025-05-04T08:18:53Z
0
0
transformers
[ "transformers", "safetensors", "fill-mask", "custom_code", "arxiv:2505.00598", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2025-05-04T08:11:37Z
---
library_name: transformers
license: mit
---

# Model Card for GERM-T

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** Haozheng Luo, ChengHao Qiu
- **License:** MIT

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/MAGICS-LAB/GERM
- **Paper:** https://arxiv.org/abs/2505.00598

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("magicslabnu/GERM-T", trust_remote_code=True)
model = AutoModelForMaskedLM.from_pretrained("magicslabnu/GERM-T", trust_remote_code=True)
```

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

GLUE

## Citation

**BibTeX:**

```
@misc{luo2025fastlowcostgenomicfoundation,
      title={Fast and Low-Cost Genomic Foundation Models via Outlier Removal},
      author={Haozheng Luo and Chenghao Qiu and Maojiang Su and Zhihan Zhou and Zoe Mehta and Guo Ye and Jerry Yao-Chieh Hu and Han Liu},
      year={2025},
      eprint={2505.00598},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2505.00598},
}
```
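Building on the Direct Use snippet, a masked-prediction sketch (the DNA sequence and single-token masking below are illustrative assumptions, not from the card):

```python
# Sketch: predict one masked token with GERM-T's masked-LM head.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("magicslabnu/GERM-T", trust_remote_code=True)
model = AutoModelForMaskedLM.from_pretrained("magicslabnu/GERM-T", trust_remote_code=True)

seq = "ACGTAGCATCGGATCTATCTATCGACACTTGGTTATCGATCTACGAGCATCTCGTT"
enc = tokenizer(seq, return_tensors="pt")
mid = enc["input_ids"].shape[1] // 2
enc["input_ids"][0, mid] = tokenizer.mask_token_id  # mask one middle token

with torch.no_grad():
    logits = model(**enc).logits
print(tokenizer.decode([logits[0, mid].argmax(-1).item()]))
```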
uygitu/ytruru
uygitu
2025-05-04T08:15:14Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2025-05-04T08:15:14Z
--- license: bigscience-openrail-m ---
RaduFlorin/WaterPollution
RaduFlorin
2025-05-04T08:15:07Z
0
0
null
[ "region:us" ]
null
2025-05-04T07:59:04Z
```python
# This is a Gradio app that creates a quiz about marine life.
import gradio as gr
import pandas as pd

# Define a function to check the user's answer.
def check_answer(question, user_answer):
    correct_answer = marine_life_df.loc[marine_life_df['Question'] == question, 'Answer'].values[0]
    if user_answer.lower() == correct_answer.lower():
        return "Correct!"
    else:
        return "Incorrect. The correct answer is: " + correct_answer

# Load the marine life quiz data from a DataFrame.
marine_life_df = pd.DataFrame({
    'Question': [
        "What is the largest animal on Earth?",
        "Which marine animal is known for its bioluminescence?",
        "What is the fastest fish in the ocean?",
        "Which marine mammal is known for its complex songs?",
        "What is the most venomous marine animal?"
    ],
    'Answer': [
        "Blue whale",
        "Firefly squid",
        "Sailfish",
        "Humpback whale",
        "Box jellyfish"
    ]
})

# Create a Gradio interface that takes a question and user answer, runs it through
# the check_answer function, and returns the result to a textbox.
with gr.Blocks() as demo:
    with gr.Row():
        question_dropdown = gr.Dropdown(
            choices=marine_life_df['Question'].tolist(),
            label="Select a Question"
        )
        answer_textbox = gr.Textbox(
            label="Your Answer",
            placeholder="Type your answer here..."
        )
    submit_button = gr.Button("Submit")
    result_textbox = gr.Textbox(
        label="Result",
        placeholder="Check your answer by clicking Submit..."
    )

    # Set up the event listener for the submit button.
    submit_button.click(
        fn=check_answer,
        inputs=[question_dropdown, answer_textbox],
        outputs=result_textbox
    )

# Launch the interface.
if __name__ == "__main__":
    demo.launch(show_error=True)
```
sjatin352/faster_rcnn_resnet50_genetic_algorithm
sjatin352
2025-05-04T08:14:34Z
0
0
null
[ "region:us" ]
null
2025-05-04T08:11:02Z
# Genetic CNN Object Detection with Faster R-CNN This repository contains a custom object detection model using Faster R-CNN with a ResNet-50 backbone, fine-tuned on a COCO 2017 subset. It uses genetic algorithms to evolve hyperparameters like filter size and activation functions. ## Usage ### 1. Install dependencies ```bash pip install -r requirements.txt ``` ### 2. Load model ```python from model import build_model import torch model = build_model(num_classes=91) model.load_state_dict(torch.load("best_model.pth")) model.eval() ``` ## Training See `Genetic Cnn Object Detection` for the full training and evolution pipeline. ## Files - `model.py`: Defines the model architecture. - `best_model.pth`: Trained model weights. - `evolution_metrics.csv`: Logs of genetic search metrics.
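A hedged inference sketch to complement the loading snippet above; `build_model` and `best_model.pth` come from this repository as described, while the image path and the 0.5 confidence threshold are assumptions for illustration.

```python
import torch
import torchvision.transforms.functional as F
from PIL import Image
from model import build_model  # from this repository's model.py

model = build_model(num_classes=91)
model.load_state_dict(torch.load("best_model.pth", map_location="cpu"))
model.eval()

# Hypothetical input image; any RGB image works.
image = Image.open("example.jpg").convert("RGB")
tensor = F.to_tensor(image)  # Faster R-CNN expects a list of CHW float tensors

with torch.no_grad():
    predictions = model([tensor])[0]

# Keep detections above an assumed confidence threshold of 0.5.
keep = predictions["scores"] > 0.5
print(predictions["boxes"][keep], predictions["labels"][keep])
```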
goldandrabbit/finetune_bert_on_yelp_trainer
goldandrabbit
2025-05-04T08:07:10Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-04T08:06:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
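The quick-start section above is left empty; here is a minimal sketch, assuming the standard text-classification pipeline applies to this BERT checkpoint fine-tuned on Yelp (the label mapping is not documented in the card, and the review text is illustrative).

```python
from transformers import pipeline

# Assumes the checkpoint exposes a standard sequence-classification head.
classifier = pipeline(
    "text-classification",
    model="goldandrabbit/finetune_bert_on_yelp_trainer",
)
print(classifier("The food was fantastic and the staff were friendly."))
```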
SoniaSolutions/whisper-large-v3-turbo-tuda
SoniaSolutions
2025-05-04T08:04:23Z
0
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-05-01T05:16:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Alireza0017/distilbert-base-uncased-finetuned-imdb
Alireza0017
2025-05-04T08:03:45Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "fill-mask", "generated_from_trainer", "en", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2025-05-04T06:24:38Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] language: - en metrics: - perplexity --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the IMDB dataset. It achieves the following results on the evaluation set: - Loss: 2.2521 ## Model description This model is used for masked language modeling (MLM) and was trained on the IMDB dataset. We did not use whole word masking during training. In our preprocessing of the dataset, we tokenized all the texts and then placed them into a single dictionary; we also used chunking. ## Intended uses & limitations This model is intended for the masked word prediction task. An improved version of the model will be added in the coming days. ## Training and evaluation data IMDB ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results Perplexity: 9.51 ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
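A minimal fill-mask sketch for the checkpoint described above; the example sentence is illustrative.

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="Alireza0017/distilbert-base-uncased-finetuned-imdb",
)
# DistilBERT uses [MASK] as its mask token.
for prediction in fill_mask("This movie was an absolute [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```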
SatyamTank/mistral-finetuned-samsum
SatyamTank
2025-05-04T08:01:03Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.1", "endpoints_compatible", "region:us" ]
null
2025-05-04T07:09:26Z
--- base_model: mistralai/Mistral-7B-Instruct-v0.1 library_name: transformers model_name: mistral-finetuned-samsum tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for mistral-finetuned-samsum This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="SatyamTank/mistral-finetuned-samsum", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mradermacher/SilverCareAI-7B-i1-GGUF
mradermacher
2025-05-04T08:00:39Z
28
0
transformers
[ "transformers", "gguf", "medical", "chinese", "lora", "health-assessment", "elderly-care", "llama-factory", "zh", "dataset:FreedomIntelligence/Huatuo26M-Lite", "base_model:yushan7kokomi/SilverCareAI-7B", "base_model:adapter:yushan7kokomi/SilverCareAI-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-05-03T20:12:05Z
--- base_model: yushan7kokomi/SilverCareAI-7B datasets: - FreedomIntelligence/Huatuo26M-Lite language: - zh library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - medical - chinese - lora - health-assessment - elderly-care - llama-factory --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/yushan7kokomi/SilverCareAI-7B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/SilverCareAI-7B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-i1-GGUF/resolve/main/SilverCareAI-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-i1-GGUF/resolve/main/SilverCareAI-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-i1-GGUF/resolve/main/SilverCareAI-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-i1-GGUF/resolve/main/SilverCareAI-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-i1-GGUF/resolve/main/SilverCareAI-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-i1-GGUF/resolve/main/SilverCareAI-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-i1-GGUF/resolve/main/SilverCareAI-7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-i1-GGUF/resolve/main/SilverCareAI-7B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-i1-GGUF/resolve/main/SilverCareAI-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-i1-GGUF/resolve/main/SilverCareAI-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-i1-GGUF/resolve/main/SilverCareAI-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-i1-GGUF/resolve/main/SilverCareAI-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-i1-GGUF/resolve/main/SilverCareAI-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-i1-GGUF/resolve/main/SilverCareAI-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-i1-GGUF/resolve/main/SilverCareAI-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-i1-GGUF/resolve/main/SilverCareAI-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-i1-GGUF/resolve/main/SilverCareAI-7B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS | | 
[GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-i1-GGUF/resolve/main/SilverCareAI-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-i1-GGUF/resolve/main/SilverCareAI-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-i1-GGUF/resolve/main/SilverCareAI-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-i1-GGUF/resolve/main/SilverCareAI-7B.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-i1-GGUF/resolve/main/SilverCareAI-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-i1-GGUF/resolve/main/SilverCareAI-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-i1-GGUF/resolve/main/SilverCareAI-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
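As a supplement to the usage notes above, a hedged Python sketch using the `llama-cpp-python` bindings (not mentioned in the card); the local filename matches the i1-Q4_K_M entry in the table, and the Chinese health-assessment prompt is illustrative of the model's stated domain.

```python
from llama_cpp import Llama

# Assumes the i1-Q4_K_M file from the table above has been downloaded locally.
llm = Llama(model_path="SilverCareAI-7B.i1-Q4_K_M.gguf", n_ctx=2048)

# Illustrative prompt: ask for notes on daily blood-pressure monitoring for the elderly.
output = llm("请简要说明老年人日常血压监测的注意事项。", max_tokens=256)
print(output["choices"][0]["text"])
```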
TentenPolllo/FruitClassifier
TentenPolllo
2025-05-04T07:59:38Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-04T07:59:17Z
--- license: apache-2.0 ---
Hachipo/Meta-Llama-3-8B-MIFT-en_10000_2
Hachipo
2025-05-04T07:59:03Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T07:55:19Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kamelcharaf/GRPO-SFT-qwen2.5-14B-quant-qwen2.5-14B-quant-mrd3-s2-sum
kamelcharaf
2025-05-04T07:57:27Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "grpo", "conversational", "arxiv:2402.03300", "base_model:kamelcharaf/Qwen2.5-14B-Instruct-quantized-4bit", "base_model:quantized:kamelcharaf/Qwen2.5-14B-Instruct-quantized-4bit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-04-28T20:34:04Z
--- base_model: kamelcharaf/Qwen2.5-14B-Instruct-quantized-4bit library_name: transformers model_name: GRPO-SFT-qwen2.5-14B-quant-qwen2.5-14B-quant-mrd3-s2-sum tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for GRPO-SFT-qwen2.5-14B-quant-qwen2.5-14B-quant-mrd3-s2-sum This model is a fine-tuned version of [kamelcharaf/Qwen2.5-14B-Instruct-quantized-4bit](https://huggingface.co/kamelcharaf/Qwen2.5-14B-Instruct-quantized-4bit). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="kamelcharaf/GRPO-SFT-qwen2.5-14B-quant-qwen2.5-14B-quant-mrd3-s2-sum", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/kamel-charaf-epfl/huggingface/runs/ynqs8mqi) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.48.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
McMillanLi/QWEN3_LoRA_by_unsloth
McMillanLi
2025-05-04T07:54:56Z
0
0
null
[ "safetensors", "unsloth", "license:apache-2.0", "region:us" ]
null
2025-05-04T07:31:29Z
--- license: apache-2.0 tags: - unsloth ---
jdchang/full-with-label-bs-1024-sg-2-step-8262
jdchang
2025-05-04T07:54:54Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2025-05-04T07:54:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
FurqanNiazi/swin-tiny-patch4-window7-224-finetuned-eurosat
FurqanNiazi
2025-05-04T07:53:03Z
21
0
transformers
[ "transformers", "tensorboard", "safetensors", "swin", "image-classification", "generated_from_trainer", "dataset:arrow", "base_model:microsoft/swin-tiny-patch4-window7-224", "base_model:finetune:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-04-15T16:04:42Z
--- library_name: transformers license: apache-2.0 base_model: microsoft/swin-tiny-patch4-window7-224 tags: - generated_from_trainer datasets: - arrow metrics: - accuracy - f1 - precision - recall model-index: - name: swin-tiny-patch4-window7-224-finetuned-eurosat results: - task: name: Image Classification type: image-classification dataset: name: arrow type: arrow config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.38346883468834686 - name: F1 type: f1 value: 0.04546184738955823 - name: Precision type: precision value: 0.04418423106947697 - name: Recall type: recall value: 0.04681555004135649 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the arrow dataset. It achieves the following results on the evaluation set: - Loss: 0.2059 - Accuracy: 0.3835 - F1: 0.0455 - Precision: 0.0442 - Recall: 0.0468 - Auc Roc: 0.5665 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Auc Roc | |:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------:| | 0.2335 | 0.9860 | 53 | 0.2059 | 0.3835 | 0.0455 | 0.0442 | 0.0468 | 0.5665 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cpu - Datasets 3.5.0 - Tokenizers 0.21.1
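The card gives no inference snippet; a minimal sketch, assuming the standard image-classification pipeline works with this checkpoint (the image path is illustrative).

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="FurqanNiazi/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
# Any local RGB image or image URL should work here.
print(classifier("example_satellite_tile.jpg"))
```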
westy412/ppo-LunarLander-v2
westy412
2025-05-04T07:51:04Z
12
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-03-30T11:50:02Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 241.75 +/- 34.70 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
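The usage block above is left as a TODO; here is a hedged sketch with `huggingface_sb3`, assuming the checkpoint was uploaded under the conventional filename `ppo-LunarLander-v2.zip`.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption based on the usual huggingface_sb3 convention.
checkpoint = load_from_hub(
    repo_id="westy412/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```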
vahidhoseini/mistral-roshdv1
vahidhoseini
2025-05-04T07:50:32Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-04T07:49:58Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ASethi04/meta-llama-Llama-3.1-8B-hellaswag-second-lora-4-0.0001-same-prompt-template
ASethi04
2025-05-04T07:47:00Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "endpoints_compatible", "region:us" ]
null
2025-05-03T14:47:57Z
--- base_model: meta-llama/Llama-3.1-8B library_name: transformers model_name: meta-llama-Llama-3.1-8B-hellaswag-second-lora-4-0.0001-same-prompt-template tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for meta-llama-Llama-3.1-8B-hellaswag-second-lora-4-0.0001-same-prompt-template This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-hellaswag-second-lora-4-0.0001-same-prompt-template", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/6op45zu3) This model was trained with SFT. ### Framework versions - TRL: 0.16.1 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
yusuke111/myBit-Llama2-jp-127M-2B4TLike-aozora
yusuke111
2025-05-04T07:45:01Z
0
0
transformers
[ "transformers", "safetensors", "bit_llama", "text-generation", "generated_from_trainer", "custom_code", "autotrain_compatible", "region:us" ]
text-generation
2025-05-04T05:57:29Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: myBit-Llama2-jp-127M-2B4TLike-aozora results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # myBit-Llama2-jp-127M-2B4TLike-aozora This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.3144 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0024 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 96 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.95) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 7.0941 | 0.0883 | 100 | 5.6975 | | 5.3346 | 0.1765 | 200 | 5.1802 | | 5.1111 | 0.2648 | 300 | 5.0230 | | 4.9794 | 0.3530 | 400 | 4.8783 | | 4.8274 | 0.4413 | 500 | 4.7476 | | 4.6969 | 0.5296 | 600 | 4.6465 | | 4.6092 | 0.6178 | 700 | 4.5655 | | 4.5154 | 0.7061 | 800 | 4.4905 | | 4.4336 | 0.7944 | 900 | 4.4462 | | 4.4034 | 0.8826 | 1000 | 4.3721 | | 4.2916 | 0.9709 | 1100 | 4.3144 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.6.0+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
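The card omits a usage example; a minimal sketch, assuming the custom `bit_llama` architecture loads through the standard auto classes with `trust_remote_code=True` (the Japanese prompt is illustrative).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yusuke111/myBit-Llama2-jp-127M-2B4TLike-aozora"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Illustrative opening in the style of the Aozora corpus ("Once upon a time...").
inputs = tokenizer("昔々あるところに", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```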
Nellie01/Cochran
Nellie01
2025-05-04T07:44:14Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-05-04T07:44:14Z
--- license: creativeml-openrail-m ---
PhanithLIM/whisper-khmer-base-v3
PhanithLIM
2025-05-04T07:40:57Z
0
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:PhanithLIM/whisper-khmer-base-v2", "base_model:finetune:PhanithLIM/whisper-khmer-base-v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-05-04T07:40:46Z
--- library_name: transformers license: apache-2.0 base_model: PhanithLIM/whisper-khmer-base-v2 tags: - generated_from_trainer metrics: - wer model-index: - name: whisper-khmer-base-v3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-khmer-base-v3 This model is a fine-tuned version of [PhanithLIM/whisper-khmer-base-v2](https://huggingface.co/PhanithLIM/whisper-khmer-base-v2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2006 - Wer: 93.5941 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 1000 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.498 | 1.0 | 183 | 0.2880 | 98.9909 | | 0.4112 | 2.0 | 366 | 0.2656 | 97.7891 | | 0.3794 | 3.0 | 549 | 0.2501 | 97.9252 | | 0.3527 | 4.0 | 732 | 0.2413 | 97.3016 | | 0.3346 | 5.0 | 915 | 0.2305 | 96.6327 | | 0.3171 | 6.0 | 1098 | 0.2253 | 96.8707 | | 0.304 | 7.0 | 1281 | 0.2153 | 96.4626 | | 0.2925 | 8.0 | 1464 | 0.2112 | 95.3175 | | 0.2811 | 9.0 | 1647 | 0.2076 | 95.3515 | | 0.2717 | 10.0 | 1830 | 0.2006 | 93.5941 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.5.1+cu121 - Datasets 3.5.1 - Tokenizers 0.21.0
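No inference snippet is provided; a minimal sketch, assuming the standard speech-recognition pipeline (the audio filename is illustrative).

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="PhanithLIM/whisper-khmer-base-v3",
)
# Any 16 kHz mono audio file should work; this Khmer clip is a placeholder.
print(asr("khmer_sample.wav")["text"])
```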
zhiyaowang/VideoMaev2-giant-nexar-solution
zhiyaowang
2025-05-04T07:34:56Z
0
0
null
[ "video", "video-classification", "videomae", "collision-prediction", "kaggle", "en", "dataset:nexar", "dataset:nexar-ai/nexar_collision_prediction", "base_model:OpenGVLab/VideoMAEv2-giant", "base_model:finetune:OpenGVLab/VideoMAEv2-giant", "license:mit", "region:us" ]
video-classification
2025-05-04T07:06:30Z
--- language: en license: mit tags: - video - video-classification - videomae - collision-prediction - kaggle datasets: - nexar - nexar-ai/nexar_collision_prediction base_model: - OpenGVLab/VideoMAEv2-giant --- # VideoMAE-based Vehicle Collision Prediction Solution ## Model Description This repository contains a pretrained VideoMAEv2-giant model fine-tuned for the Nexar Safe Driving Video Analysis competition. The model is designed to predict collision and near-miss risks in driving videos. **Performance**: 4th place on the Kaggle public leaderboard with a score of 0.886. ## Usage The model takes video frames as input and outputs a probability score indicating the likelihood of an imminent collision or near-miss event. ```python # Example usage (pseudo-code) from transformers import VideoMAEForVideoClassification import torch model = VideoMAEForVideoClassification.from_pretrained("zhiyaowang/VideoMaev2-giant-nexar-solution") # Process video frames (16 frames recommended) frames = preprocess_video(video_path) # Shape: [1, 16, 3, 224, 224] with torch.no_grad(): outputs = model(frames) probability = torch.softmax(outputs.logits / 2.0, dim=1) # Temperature scaling T=2.0 ``` ## Model Training ### Data Processing - **Frame Extraction & Timestamps**: Extract frame sequences and timestamps from each video. - **Sliding Window**: Applied a sliding window approach with 16 frames (window size) and 2 frames (stride). - **Label Assignment**: Windows with their last frame within 1.5 seconds before a collision/near-miss event were labeled positive. - **Data Balancing**: Randomly undersampled negative
ma921/gpt2-large_h_dpo_imdb_noise40_epoch5_gamma0.1
ma921
2025-05-04T07:29:25Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:ma921/gpt2-large-sft-imdb", "base_model:finetune:ma921/gpt2-large-sft-imdb", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T07:28:22Z
--- library_name: transformers license: mit base_model: ma921/gpt2-large-sft-imdb tags: - generated_from_trainer model-index: - name: gpt2-large_h_dpo_imdb_noise40_epoch5_gamma0.1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-large_h_dpo_imdb_noise40_epoch5_gamma0.1 This model is a fine-tuned version of [ma921/gpt2-large-sft-imdb](https://huggingface.co/ma921/gpt2-large-sft-imdb) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 32 - total_train_batch_size: 256 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
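The card contains no generation example; a minimal sketch, assuming standard GPT-2 text generation (the movie-review prompt reflects the IMDB domain and is illustrative).

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="ma921/gpt2-large_h_dpo_imdb_noise40_epoch5_gamma0.1",
)
print(generator("This movie was", max_new_tokens=60)[0]["generated_text"])
```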
Disya/shuttle-2.5-mini-Q4_K_M-GGUF
Disya
2025-05-04T07:27:54Z
4
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:shuttleai/shuttle-2.5-mini", "base_model:quantized:shuttleai/shuttle-2.5-mini", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-04T07:27:20Z
---
base_model: shuttleai/shuttle-2.5-mini
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---

# Disya/shuttle-2.5-mini-Q4_K_M-GGUF

This model was converted to GGUF format from [`shuttleai/shuttle-2.5-mini`](https://huggingface.co/shuttleai/shuttle-2.5-mini) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/shuttleai/shuttle-2.5-mini) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo Disya/shuttle-2.5-mini-Q4_K_M-GGUF --hf-file shuttle-2.5-mini-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo Disya/shuttle-2.5-mini-Q4_K_M-GGUF --hf-file shuttle-2.5-mini-q4_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```bash
./llama-cli --hf-repo Disya/shuttle-2.5-mini-Q4_K_M-GGUF --hf-file shuttle-2.5-mini-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or

```bash
./llama-server --hf-repo Disya/shuttle-2.5-mini-Q4_K_M-GGUF --hf-file shuttle-2.5-mini-q4_k_m.gguf -c 2048
```
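If you prefer Python over the CLI, the same GGUF file should also load through the llama-cpp-python bindings; a hedged sketch (assumes `pip install llama-cpp-python huggingface_hub`):

```python
from llama_cpp import Llama

# Downloads the quantized file from the Hub and loads it locally.
llm = Llama.from_pretrained(
    repo_id="Disya/shuttle-2.5-mini-Q4_K_M-GGUF",
    filename="shuttle-2.5-mini-q4_k_m.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```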
remy9926/mix-3
remy9926
2025-05-04T07:25:28Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T07:22:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
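The "How to Get Started" section of this card is empty; going only by the repo tags (`transformers`, `llama`, `text-generation`), a hedged loading sketch:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: remy9926/mix-3 loads as a standard Llama-architecture causal LM;
# the card itself documents no usage or prompt format.
model_id = "remy9926/mix-3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```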
Alfyyyx22/ML
Alfyyyx22
2025-05-04T07:17:14Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-04T07:17:14Z
--- license: apache-2.0 ---
DuongTrongChi/qwen-dpo-v1
DuongTrongChi
2025-05-04T07:16:23Z
1
0
transformers
[ "transformers", "safetensors", "qwen2", "feature-extraction", "text-generation-inference", "unsloth", "en", "base_model:DuongTrongChi/qwen2.5-it-sft-v1", "base_model:finetune:DuongTrongChi/qwen2.5-it-sft-v1", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2025-05-04T07:16:00Z
---
base_model: DuongTrongChi/qwen2.5-it-sft-v1
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---

# Uploaded finetuned model

- **Developed by:** DuongTrongChi
- **License:** apache-2.0
- **Finetuned from model:** DuongTrongChi/qwen2.5-it-sft-v1

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
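The card gives no inference example. Because the checkpoint descends from an instruction-tuned Qwen2.5 SFT model, a chat-style generation sketch seems natural, though the repo's pipeline tag says feature-extraction, so treat this as an assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DuongTrongChi/qwen-dpo-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Summarize what DPO fine-tuning does."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```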
Mr-FineTuner/Test_0.5epoch_01_withNewEval_andWithin-1_testnewmodels_hilangPersentase_llama
Mr-FineTuner
2025-05-04T07:16:00Z
0
0
null
[ "safetensors", "llama", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-04T07:14:02Z
# Fine-Tuned Mistral-7B CEFR Model

This is a fine-tuned version of `unsloth/mistral-7b-instruct-v0.3-bnb-4bit` for CEFR-level sentence generation.

- **Base Model**: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
- **Fine-Tuning**: LoRA with SMOTE-balanced dataset
- **Training Details**:
  - Dataset: CEFR-level sentences with SMOTE and undersampling for balance (no rebalancing for validation/test sets)
  - LoRA Parameters: r=32, lora_alpha=32, lora_dropout=0.5
  - Training Args: learning_rate=2e-5, batch_size=8, epochs=0.1, cosine scheduler
  - Optimizer: adamw_8bit
  - Early Stopping: Patience=3, threshold=0.01
- **Evaluation Metrics (Exact Matches)**:
  - CEFR Classifier Accuracy: 0.250
  - Precision (Macro): 0.390
  - Recall (Macro): 0.250
  - F1-Score (Macro): 0.230
- **Evaluation Metrics (Within ±1 Level)**:
  - CEFR Classifier Accuracy: 0.733
  - Precision (Macro): 0.845
  - Recall (Macro): 0.733
  - F1-Score (Macro): 0.726
- **Other Metrics**:
  - Perplexity: 3.041
  - Diversity (Unique Sentences): 0.700
  - Inference Time (ms): 5483.582
  - Model Size (GB): 4.1
  - Robustness (F1): 0.218
- **Confusion Matrix (Exact Matches)**:
  - CSV: [confusion_matrix_exact.csv](confusion_matrix_exact.csv)
  - Image: [confusion_matrix_exact.png](confusion_matrix_exact.png)
- **Confusion Matrix (Within ±1 Level)**:
  - CSV: [confusion_matrix_within1.csv](confusion_matrix_within1.csv)
  - Image: [confusion_matrix_within1.png](confusion_matrix_within1.png)
- **Per-Class Confusion Metrics (Exact Matches)**:
  - A1: TP=2, FP=1, FN=8, TN=49
  - A2: TP=3, FP=5, FN=7, TN=45
  - B1: TP=1, FP=8, FN=9, TN=42
  - B2: TP=7, FP=30, FN=3, TN=20
  - C1: TP=0, FP=1, FN=10, TN=49
  - C2: TP=2, FP=0, FN=8, TN=50
- **Per-Class Confusion Metrics (Within ±1 Level)**:
  - A1: TP=5, FP=0, FN=5, TN=50
  - A2: TP=7, FP=1, FN=3, TN=49
  - B1: TP=10, FP=3, FN=0, TN=47
  - B2: TP=9, FP=12, FN=1, TN=38
  - C1: TP=10, FP=0, FN=0, TN=50
  - C2: TP=3, FP=0, FN=7, TN=50
- **Usage**:
  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer

  model_id = "Mr-FineTuner/Test_0.5epoch_01_withNewEval_andWithin-1_testnewmodels_hilangPersentase_llama"
  model = AutoModelForCausalLM.from_pretrained(model_id)
  tokenizer = AutoTokenizer.from_pretrained(model_id)

  # Example inference
  prompt = "<|user|>Generate a CEFR B1 level sentence.<|end|>"
  inputs = tokenizer(prompt, return_tensors="pt")
  outputs = model.generate(**inputs, max_length=50)
  print(tokenizer.decode(outputs[0], skip_special_tokens=True))
  ```

Uploaded using `huggingface_hub`.
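The "within ±1 level" numbers above count a prediction as correct when it lands at most one CEFR level away from the gold label; a minimal sketch of that scoring (the level ordering is the standard A1-C2 scale, and the preds/golds are illustrative):

```python
# Hedged sketch of the "within ±1 level" metric.
LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]

def within_one(pred, gold):
    return abs(LEVELS.index(pred) - LEVELS.index(gold)) <= 1

preds, golds = ["B1", "C1", "A2"], ["B2", "A2", "A2"]
accuracy = sum(within_one(p, g) for p, g in zip(preds, golds)) / len(preds)
print(accuracy)  # 2 of 3 within one level -> 0.666...
```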
gpham/all-mpnet-base-v2-setfit-arxiv
gpham
2025-05-04T07:15:20Z
3
0
setfit
[ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/all-mpnet-base-v2", "base_model:finetune:sentence-transformers/all-mpnet-base-v2", "model-index", "region:us" ]
text-classification
2025-05-04T07:14:57Z
--- tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: 'Review on Quantum Computing for Lattice Field Theory In these proceedings, we review recent advances in applying quantum computing to lattice field theory. Quantum computing offers the prospect to simulate lattice field theories in parameter regimes that are largely inaccessible with the conventional Monte Carlo approach, such as the sign-problem afflicted regimes of finite baryon density, topological terms, and out-of-equilibrium dynamics. First proof-of-concept quantum computations of lattice gauge theories in (1+1) dimensions have been accomplished, and first resource-efficient quantum algorithms for lattice gauge theories in (1+1) and (2+1) dimensions have been developed. The path towards quantum computations of (3+1)-dimensional lattice gauge theories, including Lattice QCD, requires many incremental steps of improving both quantum hardware and quantum algorithms. After reviewing these requirements and recent advances, we discuss the main challenges and future directions.' - text: "Beating full state tomography for unentangled spectrum estimation\nHow many\ \ copies of a mixed state $\\rho \\in \\mathbb{C}^{d \\times d}$ are\nneeded to\ \ learn its spectrum? To date, the best known algorithms for spectrum\nestimation\ \ require as many copies as full state tomography, suggesting the\npossibility\ \ that learning a state's spectrum might be as difficult as learning\nthe entire\ \ state. We show that this is not the case in the setting of\nunentangled measurements,\ \ by giving a spectrum estimation algorithm that uses\n$n = O(d^3\\cdot (\\log\\\ log(d) / \\log(d))^4 )$ copies of $\\rho$, which is\nasymptotically fewer than\ \ the $n = \\Omega(d^3)$ copies necessary for full state\ntomography. Our algorithm\ \ is inspired by the technique of local moment matching\nfrom classical statistics,\ \ and shows how it can be applied in the quantum\nsetting.\n As an important\ \ subroutine in our spectrum estimation algorithm, we give an\nestimator of the\ \ $k$-th moment $\\operatorname{tr}(\\rho^k)$ which performs\nunentangled measurements\ \ and uses $O(d^{3-2/k})$ copies of $\\rho$ in order to\nachieve a constant multiplicative\ \ error. This directly translates to an\nadditive-error estimator of quantum Renyi\ \ entropy of order $k$ with the same\nnumber of copies.\n Finally, we present\ \ numerical evidence that the sample complexity of spectrum\nestimation can only\ \ improve over full state tomography by a sub-polynomial\nfactor. Specifically,\ \ for spectrum learning with fully entangled measurements,\nwe run simulations\ \ which suggest a lower bound of $\\Omega(d^{2 - \\gamma})$\ncopies for any constant\ \ $\\gamma > 0$. From this, we conclude the current best\nlower bound of $\\Omega(d)$\ \ is likely not tight." - text: 'Automated Bug Report Prioritization in Large Open-Source Projects Large open-source projects receive a large number of issues (known as bugs), including software defect (i.e., bug) reports and new feature requests from their user and developer communities at a fast rate. The often limited project resources do not allow them to deal with all issues. Instead, they have to prioritize them according to the project''s priorities and the issues'' severities. In this paper, we propose a novel approach to automated bug prioritization based on the natural language text of the bug reports that are stored in the open bug repositories of the issue-tracking systems. 
We conduct topic modeling using a variant of LDA called TopicMiner-MTM and text classification with the BERT large language model to achieve a higher performance level compared to the state-of-the-art. Experimental results using an existing reference dataset containing 85,156 bug reports of the Eclipse Platform project indicate that we outperform existing approaches in terms of Accuracy, Precision, Recall, and F1-measure of the bug report priority prediction.' - text: "Nearby open clusters with tidal features: golden sample selection and 3D\n\ \ structure\nOpen clusters offer unique opportunities to study stellar dynamics\ \ and\nevolution under the influence of their internal gravity, the Milky Way's\n\ gravitational field, and the interactions with encounters. Using the Gaia DR3\n\ data for a catalog of open clusters within 500 parsecs that exhibit tidal\nfeatures\ \ reported by the literature, we apply a novel method based on 3D\nprincipal component\ \ analysis to select a ``golden sample'' of nearby open\nclusters with minimal\ \ line-of-sight distortions. This approach ensures a\nsystematic comparison of\ \ 3D and 2D structural parameters for tidally perturbed\nclusters. The selected\ \ golden sample includes Blanco 1, Melotte 20, Melotte 22,\nNGC 2632, NGC 7092,\ \ NGC 1662, Roslund 6 and Melotte 111. We analyze these\nclusters by fitting both\ \ 2D and 3D King Profiles to their stellar density\ndistributions. Our results\ \ reveal systematic discrepancies: most of the golden\nsample clusters exhibit\ \ larger 3D tidal radii compared to their 2D\ncounterparts, demonstrating that\ \ the 2D projection effects bias the measured\ncluster size. Furthermore, the\ \ 3D density profiles show stronger deviations\nfrom King profiles at the tidal\ \ radii ($\\Delta \\rho_{\\rm 3D} > \\Delta \\rho_{\\rm\n2D}$), highlighting enhanced\ \ sensitivity to tidal disturbances. Additionally,\nwe investigate the spatial\ \ distribution of cluster members relative to their\nbulk motion in the Galactic\ \ plane. We find that some clusters exhibit tidal\nfeatures oriented perpendicular\ \ to their direction of motion, which can be\nattributed to the fact that the\ \ current surveys only detect the curved inner\nregions of the tidal features.\ \ In conclusion, this work offers a golden sample\nof nearby open clusters that\ \ are most reliable for 3D structure analysis and\nunderscores the necessity of\ \ 3D analysis in characterizing OC morphological\nasymmetries, determining cluster\ \ size, and identifying tidal features." - text: "Revisiting the physical properties of (LaS)1+d(NbS2) misfit-layered\n compounds\n\ Electrical transport in polycrystalline and single-crystalline (LaS)1+d(NbS2)\n\ misfit-layered compounds was measured. Polycrystalline samples were synthesized\n\ using S raw materials of different purities (2N or 6N), and single-crystalline\n\ samples were grown using two types of transport agents (2NH4Cl+PbCl2 or NH4Cl)\n\ via the chemical vapor transport method. The temperature dependence on\nresistivity\ \ dropped at 1.3-2.0 K for some of the samples, which might be\naffected by the\ \ unknown impurity. (LaS)1+d(NbS2) misfit-layered compounds for\nthe main phase\ \ of those obtained samples exhibited no superconductivity above\n0.2 K by the\ \ resistivity measurement." 
metrics: - f1 pipeline_tag: text-classification library_name: setfit inference: true base_model: sentence-transformers/all-mpnet-base-v2 model-index: - name: SetFit with sentence-transformers/all-mpnet-base-v2 results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: f1 value: 0.5294216467829347 name: F1 --- # SetFit with sentence-transformers/all-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 384 tokens - **Number of Classes:** 20 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | 
|:------|:----------|
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 18 | <ul><li>"Practical Application of the Quantum Carleman Lattice Boltzmann Method\n in Industrial CFD Simulations\nComputational Fluid Dynamics simulations are crucial in industrial\napplications but require extensive computational resources, particularly for\nextreme turbulent regimes. While classical digital approaches remain the\nstandard, quantum computing promises a breakthrough by enabling a more\nefficient encoding of large-scale simulations with a limited number of qubits.\n This work presents a practical numerical assessment of a hybrid\nquantum-classical approach to CFD based on the Lattice Boltzmann Method (LBM).\nThe inherently non-linear LBM equations are linearized via a Carleman expansion\nand solved using the quantum Harrow Hassidim Lloyd algorithm (HHL). We evaluate\nthis method on three benchmark cases featuring different boundary conditions,\nperiodic, bounceback, and moving wall, using statevector emulation on\nhigh-performance computing resources.\n Our results confirm the validity of the approach, achieving median error\nfidelities on the order of $10^{-3}$ and success probabilities sufficient for\npractical quantum state sampling. Notably, the spectral properties of small\nlattice systems closely approximate those of larger ones, suggesting a pathway\nto mitigate one of HHL's bottlenecks: eigenvalue pre-evaluation."</li><li>'On the Generalization of Adversarially Trained Quantum Classifiers\nQuantum classifiers are vulnerable to adversarial attacks that manipulate\ntheir input classical or quantum data. A promising countermeasure is\nadversarial training, where quantum classifiers are trained by using an\nattack-aware, adversarial loss function. This work establishes novel bounds on\nthe generalization error of adversarially trained quantum classifiers when\ntested in the presence of perturbation-constrained adversaries. The bounds\nquantify the excess generalization error incurred to ensure robustness to\nadversarial attacks as scaling with the training sample size $m$ as\n$1/\\sqrt{m}$, while yielding insights into the impact of the quantum embedding.\nFor quantum binary classifiers employing \\textit{rotation embedding}, we find\nthat, in the presence of adversarial attacks on classical inputs $\\mathbf{x}$,\nthe increase in sample complexity due to adversarial training over conventional\ntraining vanishes in the limit of high dimensional inputs $\\mathbf{x}$. In\ncontrast, when the adversary can directly attack the quantum state\n$\\rho(\\mathbf{x})$ encoding the input $\\mathbf{x}$, the excess generalization\nerror depends on the choice of embedding only through its Hilbert space\ndimension. The results are also extended to multi-class classifiers. 
We\nvalidate our theoretical findings with numerical experiments.'</li><li>'Coupled Instantons In A Four-Well Potential With Application To The\n Tunneling Of A Composite Particle\nCoupled instantons are introduced by generalizing the double well potential\nto multiple mutually coupled wells. Physically this corresponds to the\nsimultaneous tunneling of multiple degrees of freedom. A system with four equal\nminima is examined in detail. It has three instanton types or flavors with\ndistinct actions. For weak coupling and subject to there being a single large\n(or small) parameter, the interactive system can be handled perturbatively. The\nzero mode problem arising from time translation symmetry is handled via the\nFadeev-Popov procedure. A diagrammatic procedure allows corrections to the\nfluctuation determinant to be calculated systematically. Independent instanton\ncontributions are summed over by extending the dilute gas approximation to\nthree flavors and energy splittings of the lowest four states is calculated.\nAll tunneling amplitudes are concisely expressed in terms of elementary\nfunctions. While the model is possibly useful for a variety of physical\nsystems, an application is made here to the tunneling of a composite particle\nin one dimension.'</li></ul> | | 7 | <ul><li>'Scalar and tensor charmonium resonances in coupled-channel scattering\n from QCD\nWe determine $J^{PC}=0^{++}$ and $2^{++}$ hadron-hadron scattering amplitudes\nin the charmonium energy region up to 4100 MeV using lattice QCD, a\nfirst-principles approach to QCD. Working at $m_\\pi\\approx 391$ MeV, more than\n200 finite-volume energy levels are computed and these are used in extensions\nof the L\\"uscher formalism to determine infinite-volume coupled-channel\nscattering amplitudes. We find that this energy region contains a single\n$\\chi_{c0}$ and a single $\\chi_{c2}$ resonance. Both are found as pole\nsingularities on the closest unphysical Riemann sheet, just below 4000 MeV with\nwidths around 70 MeV. The largest couplings are to kinematically-closed $D^*\n\\bar{D}^*$ channels in $S$-wave, and couplings to several decay channels\nconsisting of pairs of open-charm mesons are found to be large and significant\nin both cases. Above the ground state $\\chi_{c0}$, no other scalar bound-states\nor near-$D\\bar{D}$ threshold resonances are found, in contrast to several\ntheoretical and experimental studies.'</li><li>'Quasi-degenerate baryon energy states, the Feynman--Hellmann theorem and\n transition matrix elements\nThe standard method for determining matrix elements in lattice QCD requires\nthe computation of three-point correlation functions. This has the disadvantage\nof requiring two large time separations: one between the hadron source and\noperator and the other from the operator to the hadron sink. Here we consider\nan alternative formalism, based on the Dyson expansion leading to the\nFeynman-Hellmann theorem, which only requires the computation of two-point\ncorrelation functions. Both the cases of degenerate energy levels and\nquasi-degenerate energy levels which correspond to diagonal and transition\nmatrix elements respectively can be considered in this formalism. 
As an example\nnumerical results for the Sigma to Nucleon vector transition matrix element are\npresented.'</li><li>"Beyond Generalized Eigenvalues in Lattice Quantum Field Theory\nTwo analysis techniques, the generalized eigenvalue method (GEM) or Prony's\n(or related) method (PM), are commonly used to analyze statistical estimates of\ncorrelation functions produced in lattice quantum field theory calculations.\nGEM takes full advantage of the matrix structure of correlation functions but\nonly considers individual pairs of time separations when much more data exists.\nPM can be applied to many time separations and many individual matrix elements\nsimultaneously but does not fully exploit the matrix structure of the\ncorrelation function. We combine both these methods into a single framework\nbased on matrix polynomials. As these algebraic methods are well known for\nproducing extensive spectral information about statistically-noisy data, the\nmethod should be paired with some information criteria, like the recently\nproposed Bayesean model averaging."</li></ul> | | 12 | <ul><li>'Persistence of chimera states and the challenge for synchronization in\n real-world networks\nThe emergence of order in nature manifests in different phenomena, with\nsynchronization being one of the most representative examples. Understanding\nthe role played by the interactions between the constituting parts of a complex\nsystem in synchronization has become a pivotal research question bridging\nnetwork science and dynamical systems. Particular attention has been paid to\nthe emergence of chimera states, where subsets of synchronized oscillations\ncoexist with asynchronous ones. Such coexistence of coherence and incoherence\nis a perfect example where order and disorder can persist in a long-lasting\nregime. Although considerable progress has been made in recent years to\nunderstand such coherent and (coexisting) incoherent states, how they manifest\nin real-world networks remains to be addressed. Based on a symmetry-breaking\nmechanism, in this paper, we shed light on the role that non-normality, a\nubiquitous structural property of real networks, has in the emergence of\nseveral diverse dynamical phenomena, e.g., amplitude chimeras or oscillon\npatterns. Specifically, we demonstrate that the prevalence of source or leader\nnodes in networks leads to the manifestation of phase chimera states.\nThroughout the paper, we emphasize that non-normality poses ongoing challenges\nto global synchronization and is instrumental in the emergence of chimera\nstates.'</li><li>'Nonlinear dynamical systems: Time reversibility {\\it versus} sensitivity\n to the initial conditions\nTime reversal of vast classes of phenomena has direct implications with\npredictability, causality and the second principle of thermodynamics. We\nanalyze in detail time reversibility of a paradigmatic dissipative nonlinear\ndynamical system, namely the logistic map $x_{t+1}=1-ax_t^2$. A close relation\nis revealed between time reversibility and the sensitivity to the initial\nconditions. Indeed, depending on the initial condition and the size of the time\nseries, time reversal can enable the recovery, within a small error bar, of\npast information when the Lyapunov exponent is non-positive, notably at the\nFeigenbaum point (edge of chaos), where weak chaos is known to exist. Past\ninformation is gradually lost for increasingly large Lyapunov exponent (strong\nchaos), notably at $a=2$ where it attains a large value. 
These facts open the\ndoor to diverse novel applications in physicochemical, astronomical, medical,\nfinancial, and other time series.'</li><li>'Sakaguchi Swarmalators\nSwarmalators are phase oscillators that cluster in space, like fireflies\nflashing on a swarm to attract mates. Interactions between particles, which\ntend to synchronize their phases and align their motion, decrease with the\ndistance and phase difference between them, coupling the spatial and phase\ndynamics. In this work, we explore the effects of disorder induced by phase\nfrustration on a system of Swarmalators that move on a one-dimensional ring.\nOur model is inspired by the well-known Kuramoto-Sakaguchi equations. We find,\nnumerically and analytically, the ordered and disordered states that emerge in\nthe system. The active states, not present in the model without disorder,\nresemble states found previously in numerical studies for the 2D Swarmalators\nsystem. One of these states, in particular, shows similarities to turbulence\ngenerated in a flattened media. We show that all ordered states can be\ngenerated for any values of the coupling constants by tuning the phase\nfrustration parameters only. Moreover, many of these combinations display\nmulti-stability.'</li></ul> | | 15 | <ul><li>"MetasurfaceViT: A generic AI model for metasurface inverse design\nMetasurfaces, sub-wavelength artificial structures, can control light's\namplitude, phase, and polar ization, enabling applications in efficient\nimaging, holograms, and sensing. Recent years, AI has witnessed remarkable\nprogress and spurred scientific discovery. In metasurface design, optical\ninverse design has recently emerged as a revolutionary approach. It uses deep\nlearning to create a nonlinear mapping between optical structures and\nfunctions, bypassing time-consuming traditional design and attaining higher\naccuracy. Yet, current deep-learning models for optical design face\nlimitations. They often work only for fixed wavelengths and polarizations, and\nlack universality as input-output vector size changes may require retraining.\nThere's also a lack of compatibility across different application scenarios.\nThis paper introduces MetasurfaceViT, a revolutionary generic AI model. It\nleverages a large amount of data using Jones matrices and physics-informed data\naugmentation. By pre-training through masking wavelengths and polarization\nchannels, it can reconstruct full-wavelength Jones matrices, which will be\nutilized by fine-tuning model to enable inverse design. Finally, a tandem\nworkflow appended by a forward prediction network is introduced to evaluate\nperformance. The versatility of MetasurfaceViT with high prediction accuracy\nwill open a new paradigm for optical inverse design."</li><li>'A hybrid U-Net and Fourier neural operator framework for the large-eddy\n simulation of turbulent flows over periodic hills\nAccurate and efficient predictions of three-dimensional (3D) turbulent flows\nare of significant importance in the fields of science and engineering. In the\ncurrent work, we propose a hybrid U-Net and Fourier neural operator (HUFNO)\nmethod, tailored for mixed periodic and non-periodic boundary conditions which\nare often encountered in complex turbulence problems. The HUFNO model is tested\nin the large-eddy simulation (LES) of 3D periodic hill turbulence featuring\nstrong flow separations. 
Compared to the original Fourier neural operator (FNO)\nand the convolutional neural network (CNN)-based U-Net framework, the HUFNO\nmodel has a higher accuracy in the predictions of the velocity field and\nReynolds stresses. Further numerical experiments in the LES show that the HUFNO\nframework outperforms the traditional Smagorinsky (SMAG) model and the\nwall-adapted local eddy-viscosity (WALE) model in the predictions of the\nturbulence statistics, the energy spectrum, the wall stresses and the flow\nseparation structures, with much lower computational cost. Importantly, the\naccuracy and efficiency are transferable to unseen initial conditions and hill\nshapes, underscoring its great potentials for the fast prediction of strongly\nseparated turbulent flows over curved boundaries.'</li><li>'Vortex droplets and lattice patterns in two-dimensional traps: A\n photonic spin-orbit-coupling perspective\nIn the context of the mean-field exciton-polariton (EP) theory with balanced\nloss and pump, we investigate the formation of lattice structures built of\nindividual vortex-antivortex (VAV) bound states under the action of the\ntwo-dimensional harmonic-oscillator (HO) potential trap and effective\nspin-orbit coupling (SOC), produced by the TE-TM splitting in the polariton\nsystem. The number of VAV elements (pixels) building the structures grow with\nthe increase of self- and cross-interaction coefficients. Depending upon their\nvalues and the trapping frequency, stable ring-shaped, circular, square-shaped,\nrectangular, pentagonal, hexagonal, and triangular patterns are produced, with\nthe central site left vacant or occupied in the lattice patterns of different\ntypes. The results suggest the experimental creation of the new patterns and\ntheir possible use for the design of integrated circuits in EP setups,\ncontrolled by the strengths of the TE-TM splitting, nonlinearity, and HO trap.'</li></ul> | | 8 | <ul><li>'Interplay of $95$ GeV Diphoton Excess and Dark Matter in Supersymmetric\n Triplet Model\nThe decay of the Higgs boson and the nature of dark matter remain fundamental\nchallenges in particle physics. We investigate the $95$ GeV diphoton excess and\ndark matter within the framework of the triplet-extended Minimal Supersymmetric\nStandard Model (TMSSM). In this model, an additional Hypercharge $Y=0$,\n$SU(2)_L$ triplet superfield is introduced. Mixing between the triplet and\ndoublet Higgs states enhances the diphoton signal strength of the $95$ GeV\nHiggs boson, resulting in $\\mu_{\\gamma\\gamma}^{\\text{CMS+ATLAS}} =\n0.24_{-0.08}^{+0.09}$, which is consistent with experimental observations. This\nenhancement arises primarily from charged Higgs loop contributions.\nAdditionally, the model accommodates viable dark matter candidates in the form\nof a bino-dominated neutralino. The relic density is reduced to the observed\nvalue through resonance-enhanced annihilation via the Higgs portal or\nco-annihilation with the triplino or higgsino. This reduction remains\nconsistent with constraints from direct and indirect detection experiments. 
A\ncomprehensive parameter scan demonstrates that the TMSSM can simultaneously\nexplain the $95$ GeV diphoton excess, the observed $125$ GeV Higgs mass, and\nthe dark matter relic density, establishing a compelling and theoretically\nconsistent framework.'</li><li>"Particles in finite volumes and a toy model of decaying neutrons\nIt is well-known that the momentum spectra of particles confined to finite\nspatial volumes deviate from the continuous spectra used for unconfined\nparticles. In this article, we consider real scalar particles confined to\nfinite volumes with periodic boundary conditions, such that the particles'\nspectra are discrete. We directly compute the density matrices describing the\ndecay processes $\\phi \\to \\varphi^2$ and $\\phi \\to \\varphi\\chi\\nu$, and\nsubsequently derive expressions for the decay probabilities both for confined\nand unconfined particles. The latter decay process is used as a rough toy model\nfor a neutron decaying into a proton, an electron, and an anti-electron\nneutrino. We propose that finite volume effects can have an impact on the\noutcomes of experiments measuring the neutron lifetime. In addition, our\nfindings at the toy model level suggest that taking into account possible\ninitial correlations between neutrons and their daughter particles might be\nrelevant as well."</li><li>'$B$ meson decays to vector charmonium(like) states and a $K$ meson: the\n role of final-state interactions\nA series of vector charmonium(like) states, accompanied by a $K$ meson, have\nbeen observed in the decays of $B$ meson. These processes are color-suppressed\nat the quark level, as inferred from topological diagram analysis. In this\nwork, we calculate the branching fractions of the decays $B \\to \\psi K$, where\n$\\psi$ denotes the charmonium(like) states $\\psi(1S)$, $\\psi(2S)$,\n$\\psi(4040)$, $\\psi(3770)$, and $\\psi(4160)$. Our analysis incorporates both\nshort-distance (naive factorization approach) and long-distance (final-state\ninteractions) contributions. Within reasonable parameters, our results align\nwith experimental data except for the $ \\psi(4160)$, suggesting its possible\nexotic nature. Furthermore, we find that long-distance contributions dominate\nthese decay processes, highlighting the crucial role of final-state\ninteractions in the productions of charmonium(like) states in $B$ decays.'</li></ul> | | 11 | <ul><li>"Approximation of Invariant Solutions to the Nonlinear Filtration\n Equation by Modified Pade Approximants\nThis paper deals with a mathematical model for oil filtration in a porous\nmedium and its self-similar and traveling wave regimes. The model consists of\nthe equation for conservation mass and dependencies for porosity, permeability,\nand oil density on pressure. The oil viscosity is considered to be the\nexperimentally expired parabolic relationship on pressure. To close the model,\ntwo types of Darcy law are used: the classic one and the dynamic one describing\nthe relaxation processes during filtration. In the former case, self-similar\nsolutions are studied, while in the latter case, traveling wave solutions are\nthe focus. Using the invariant solutions, the initial model is reduced to the\nnonlinear ordinary differential equations possessing the trajectories vanishing\nat infinity and representing the moving liquid fronts in porous media. To\napproximate these solutions, we elaborate the semi-analytic procedure based on\nmodified Pade approximants. 
In fact, we calculate sequentially Pade\napproximants up to 3d order for a two-point boundary value problem on the\nsemi-infinite domain. A good agreement of evaluated Pade approximants and\nnumerical solutions is observed. The approach provides relatively simple\nquasi-rational expressions of solutions and can be easily adapted for other\ntypes of model's nonlinearity."</li><li>'Hamel equations and quasivelocities for nonholonomic systems with\n inequality constraints\nIn this paper we derive Hamel equations for the motion of nonholonomic\nsystems subject to inequality constraints in quasivelocities. As examples, the\nvertical rolling disk hitting a wall and the Chaplygin sleigh with a knife edge\nconstraint hitting a circular table are shown to illustrate the theoretical\nresults.'</li><li>'${\\mathsf D}^2={\\mathsf H}+1/4$ with point interactions\nLet ${\\mathsf D}$ and ${\\mathsf H}$ be the self-adjoint, one-dimensional\nDirac and Schr\\"odinger operators in $L^{2}(\\mathbb{R};\\mathbb{C}^{2})$ and\n$L^{2}(\\mathbb{R};\\mathbb{C})$ respectively. It is well known that, in absence\nof an external potential, the two operators are related through the equality\n${\\mathsf D}^2 = ({\\mathsf H} + \\frac{1}{4}){\\mathbb 1}$. We show that such a\nkind of relation also holds in the case of $n$-point singular perturbations:\ngiven any self-adjoint realization $\\widehat {\\mathsf D}$ of the formal sum\n${\\mathsf D}+\\sum_{k=1}^{n}\\gamma_{k}\\delta_{y_{k}}$, we explicitly determine\nthe self-adjoint realization $\\widehat{\\mathsf H}$ of ${\\mathsf H}{\\mathbb\n1}+\\sum_{k=1}^{n}(\\alpha_{k}\\delta_{y_{k}}+\\beta_{k}\\delta\'_{y_{k}})$ such that\n${\\widehat{\\mathsf D}}^2 = \\widehat{\\mathsf H} + \\frac{{\\mathbb 1}}{4}$. The\nfound correspondence preserves the subclasses of self-adjoint realizations\ncorresponding to both the local and the separating boundary conditions. Some\nconnections with supersymmetry are provided. The case of nonlocal boundary\nconditions allows the study of the relation ${\\mathsf D}^{2}={\\mathsf\nH}+\\frac14$ for quantum graphs with (at most) two ends; in particular, the\nsquare of the extension corresponding to Kirchhoff-type boundary conditions for\nthe Dirac operator on the graph gives the direct sum of two Schr\\"odinger\noperators on the same graph, one with the usual Kirchhoff boundary conditions\nand the other with a sort of reversed Kirchhoff ones.'</li></ul> | | 19 | <ul><li>'Rank-based transfer learning for high-dimensional survival data with\n application to sepsis data\nSepsis remains a critical challenge due to its high mortality and complex\nprognosis. To address data limitations in studying MSSA sepsis, we extend\nexisting transfer learning frameworks to accommodate transformation models for\nhigh-dimensional survival data. Specifically, we construct a measurement index\nbased on C-index for intelligently identifying the helpful source datasets, and\nthe target model performance is improved by leveraging information from the\nidentified source datasets via performing the transfer step and debiasing step.\nWe further provide an algorithm to construct confidence intervals for each\ncoefficient component. Another significant development is that statistical\nproperties are rigorously established, including $\\ell_1/\\ell_2$-estimation\nerror bounds of the transfer learning algorithm, detection consistency property\nof the transferable source detection algorithm and asymptotic theories for the\nconfidence interval construction. 
Extensive simulations and analysis of\nMIMIC-IV sepsis data demonstrate the estimation and prediction accuracy, and\npractical advantages of our approach, providing significant improvements in\nsurvival estimates for MSSA sepsis patients.'</li><li>'Ireland Topsoil Contamination Analysis: A Clustering Approach\nThis study investigates topsoil contamination in Ireland using geochemical\ndata from the Tellus Programme, analyzing 4,278 soil samples across 17,983\nsquare kilometer. The research employs CPF clustering with spatial constraints\nto classify samples into seven different groups, revealing distinct\ncontamination patterns.'</li><li>"Predicting and Mitigating Agricultural Price Volatility Using Climate\n Scenarios and Risk Models\nAgricultural price volatility challenges sustainable finance, planning, and\npolicy, driven by market dynamics and meteorological factors such as\ntemperature and precipitation. In India, the Minimum Support Price (MSP) system\nacts as implicit crop insurance, shielding farmers from price drops without\npremium payments. We analyze the impact of climate on price volatility for\nsoybean (Madhya Pradesh), rice (Assam), and cotton (Gujarat). Using ERA5-Land\nreanalysis data from the Copernicus Climate Change Service, we analyze\nhistorical climate patterns and evaluate two scenarios: SSP2.4.5 (moderate\ncase) and SSP5.8.5 (severe case). Our findings show that weather conditions\nstrongly influence price fluctuations and that integrating meteorological data\ninto volatility models enhances risk-hedging. Using the Exponential Generalized\nAutoregressive Conditional Heteroskedasticity (EGARCH) model, we estimate\nconditional price volatility and identify cross-correlations between weather\nand price volatility movements. Recognizing MSP's equivalence to a European put\noption, we apply the Black-Scholes model to estimate its implicit premium,\nquantifying its fiscal cost. We propose this novel market-based risk-hedging\nmechanism wherein the government purchases insurance equivalent to MSP,\nleveraging Black-Scholes for accurate premium estimation. Our results\nunderscore the importance of meteorological data in agricultural risk modeling,\nsupporting targeted insurance and strengthening resilience in agricultural\nfinance. This climate-informed financial framework enhances risk-sharing,\nstabilizes prices, and informs sustainable agricultural policy under growing\nclimate uncertainty."</li></ul> | | 17 | <ul><li>'Bitcoin: A life in crises\nIn this study, we investigate the BTC price time-series (17 August 2010-27\nJune 2021) and show that the 2017 pricing episode is not unique. We describe at\nleast ten new events, which occurred since 2010-2011 and span more than five\norders of price magnitudes ($US 1-$US 60k). We find that those events have a\nsimilar duration of approx. 50-100 days. Although we are not able to predict\ntimes of a price peak, we however succeed to approximate the BTC price\nevolution using a function that is similar to a Fibonacci sequence. 
Finally, we\ncomplete a comparison with other types of financial instruments (equities,\ncurrencies, gold) which suggests that BTC may be classified as an illiquid\nasset.'</li><li>'Econometric Model Using Arbitrage Pricing Theory and Quantile Regression\n to Estimate the Risk Factors Driving Crude Oil Returns\nThis work adopts a novel approach to determine the risk and return of crude\noil stocks by employing Arbitrage Pricing Theory (APT) and Quantile Regression\n(QR).The APT identifies the underlying risk factors likely to impact crude oil\nreturns.Subsequently, QR estimates the relationship between the factors and the\nreturns across different quantiles of the distribution. The West Texas\nIntermediate (WTI) crude oil price is used in this study as a benchmark for\ncrude oil prices. WTI price fluctuations can have a significant impact on the\nperformance of crude oil stocks and, subsequently, the global economy.To\ndetermine the proposed models stability, various statistical measures are used\nin this study.The results show that changes in WTI returns can have varying\neffects depending on market conditions and levels of volatility. The study\nhighlights the impact of structural discontinuities on returns, which can be\ncaused by changes in the global economy and the demand for crude oil.The\ninclusion of pandemic, geopolitical, and inflation-related explanatory\nvariables add uniqueness to this study as it considers current global events\nthat can affect crude oil returns.Findings show that the key factors that pose\nmajor risks to returns are industrial production, inflation, the global price\nof energy, the shape of the yield curve, and global economic policy\nuncertainty.This implies that while making investing decisions in WTI futures,\ninvestors should pay particular attention to these elements'</li><li>'Commodities Trading through Deep Policy Gradient Methods\nAlgorithmic trading has gained attention due to its potential for generating\nsuperior returns. This paper investigates the effectiveness of deep\nreinforcement learning (DRL) methods in algorithmic commodities trading. It\nformulates the commodities trading problem as a continuous, discrete-time\nstochastic dynamical system. The proposed system employs a novel\ntime-discretization scheme that adapts to market volatility, enhancing the\nstatistical properties of subsampled financial time series. To optimize\ntransaction-cost- and risk-sensitive trading agents, two policy gradient\nalgorithms, namely actor-based and actor-critic-based approaches, are\nintroduced. These agents utilize CNNs and LSTMs as parametric function\napproximators to map historical price observations to market\npositions.Backtesting on front-month natural gas futures demonstrates that DRL\nmodels increase the Sharpe ratio by $83\\%$ compared to the buy-and-hold\nbaseline. Additionally, the risk profile of the agents can be customized\nthrough a hyperparameter that regulates risk sensitivity in the reward function\nduring the optimization process. The actor-based models outperform the\nactor-critic-based models, while the CNN-based models show a slight performance\nadvantage over the LSTM-based models.'</li></ul> | | 10 | <ul><li>'The Hao-Ng isomorphism theorem for reduced crossed products\nWe prove the Hao-Ng isomorphism for reduced crossed products by locally\ncompact Hausdorff groups. 
More precisely, for a non-degenerate\n$\\mathrm{C}^*$-correspondence $X$ and a generalized gauge action $G\n\\curvearrowright X$ by a locally compact Hausdorff group $G$, we prove the\ncommutation ${\\mathcal{O}}_{X\\rtimes_rG}\\cong {\\mathcal{O}}_X\\rtimes_rG$ of the\nreduced crossed product with the Cuntz-Pimsner C*-algebra construction.'</li><li>"A p-adaptive polytopal discontinuous Galerkin method for high-order\n approximation of brain electrophysiology\nMultiscale mathematical models have shown great promise in computational\nbrain electrophysiology but are still hindered by high computational costs due\nto fast dynamics and complex brain geometries, requiring very fine\nspatio-temporal resolution. This paper introduces a novel p-adaptive\ndiscontinuous Galerkin method on polytopal grids (PolyDG) coupled with\nCrank-Nicolson time integration to approximate such models efficiently. The\np-adaptive method enhances local accuracy via dynamic, element-wise polynomial\nrefinement/de-refinement guided by a-posteriori error estimators. A novel\nclustering algorithm automatizes the selection of elements for adaptive\nupdates, further improving efficiency. A wide set of numerical tests, including\nepileptic seizure simulations in a sagittal section of a human brain stem,\ndemonstrate the method's ability to reduce computational load while maintaining\nthe accuracy of the numerical solution in capturing the dynamics of multiple\nwavefronts."</li><li>'On $L^α$-flatness of Erdős-Littlewood\'s polynomials\nIt is shown that Erd\\"{o}s--Littlewood\'s polynomials are not $L^\\alpha$-flat\nwhen $\\alpha > 2$ is an even integer (and hence for any $\\alpha \\geq 4$). This\nprovides a partial solution to an old problem posed by Littlewood.\nConsequently, we obtain a positive answer to the analogous Erd\\"{o}s--Newman\nconjecture for polynomials with coefficients $\\pm 1$; that is, there is no\nultraflat sequence of polynomials from the class of Erd\\"{o}s--Littlewood\npolynomials.\n Our proof is short and simple. It relies on the classical lemma for $L^p$\nnorms of the Dirichlet kernel, the Marcinkiewicz--Zygmund interpolation\ninequalities, and the $p$-concentration theorem due to A. Bonami and S.\nR\\\'ev\\\'esz.'</li></ul> | | 14 | <ul><li>'Statistical approach of nuclear multifragmentation with realistic\n nuclear equation of state\nIn this work, Canonical Thermodynamical model for nuclear multifragmentation\nhas been updated with realistic nuclear equation of state. Mass distribution,\nintermediate mass fragment multiplicity as well as isospin sensitive\nobservables have been investigated with semi-microscopic approach of\ndetermining nuclear binding and excitation energies. Production of neutron rich\nisotopes as well as isoscaling and isobaric yield ratio parameters have been\nsignificantly modified due to inclusion of this realistic nuclear equation of\nstate.'</li><li>'Impact of MvdW Equation of State and Neutrino Mass on r and s Process\n Heavy Element Nucleosynthesis in Spiral, Elliptical and Dwarf Galactic\n Environments and Kilonovae Events\nWe present an analysis of heavy element production with massive neutrinos in\ngalaxies of varying types (spiral, elliptical, and dwarf) and kilonovae events\nby incorporating a Multicomponent van der Waals (MvdW) equation of state (EoS)\nfor the opacity functions. 
This EoS is applied to derive opacities and\ncalculate the yields of isotopes formed in r-process and s-process\nnucleosynthesis, with and without the influence of neutrino masses or\noscillations. We look at both the lanthanide and actinide sequences using the\nMvdW parameters that involve the interaction strength and excluded volume\neffects. Our results reflect the characteristic differences found in r and s\nprocesses in the synthesis and long-term evolution of isotopes from the U, Th,\nand Sr chain across galactic environments. The inclusion of neutrino masses\nenhances the neutron-to-proton ratio, favoring heavier r-process isotopes and\naltering the overall galactic yields by cross section suppression. These\nfindings offer insights into the interplay of nuclear physics and astrophysical\nenvironments, highlighting the sensitivity of nucleosynthetic pathways to EoS\nmodifications and neutrino physics. We compare these results to metallicity\nprofiles of similar models: the Galactic Leaky Box, the Galactic Inflow, and\nthe Galactic Closed Box models and to the kilonova event GW170781.'</li><li>'Effects of magnetic field on the evolution of energy density\n fluctuations\nWe study the effects of a static and uniform magnetic field on the evolution\nof energy density fluctuations present in a medium. By numerically solving the\nrelativistic Boltzmann-Vlasov equation within the relaxation time\napproximation, we explicitly show that magnetic field can affect the\ncharacteristics of energy density fluctuations at the timescale the system\nachieves local thermodynamic equilibrium. A detailed momentum mode analysis of\nfluctuations reveals that magnetic field increases the damping of mode\noscillations, especially for the low momentum modes. This leads to a reduction\nin the ultraviolet (high momentum) cutoff of fluctuations and also slows down\nthe dissipation of relatively low momentum fluctuation modes. We discuss the\nphenomenological implications of our study on various sources of fluctuations\nin relativistic heavy-ion collisions.'</li></ul> | | 16 | <ul><li>'Investigation of Fractional Compartmental Models with Application to\n Amiodarone Drug Diffusion in Pharmacokinetics\nThis paper presents three fractional models formulated from a classical\nPharmacokinetics compartmental system: commensurable, non-commensurable, and\nimplicit non-commensurable models. Their distinguishing characteristics are\nfurther examined comprehensively. Because analytic solutions for such models\nare typically challenging to obtain, we study the application of the Fractional\nFinite Difference Method (FFDM) to simulate approximate solutions. The\ncharacteristic of the non-commensurable model is shown to be incompatible with\nthe concept of mass balance. However, it appeared to outlast fractional\ncalculus theory when simulating anomalous kinetics. We proved this by fitting\nthe proposed fractional and classical models to an experimental data set\n(amiodarone) and estimated the parameters using the least-square approach. The\nclassical model diverged, but the non-commensurable model predicted a fit\ncomparable to the other two fractional models. The fractional models described\nanomalous diffusion better than classical theories. The numerical results\nshowed that the proposed numerical method is equally efficient in solving any\ncomplex compartmental models, as they performed well in simulations for the\nclassic example of the model.'</li><li>'Stochastic trade-offs and the emergence of diversification in E. 
coli\n evolution experiments\nLaboratory experiments with bacterial colonies, under well-controlled\nconditions often lead to evolutionary diversification, where at least two\necotypes emerge from an initially monomorphic population. Empirical evidence\nsuggests that such "evolutionary branching" occurs stochastically, even under\nfixed and stable conditions. This stochastic nature is characterized by: (i)\noccurrence in a significant fraction, but not all, of experimental settings,\n(ii) emergence at widely varying times, and (iii) variable relative abundances\nof the resulting subpopulations across experiments. Theoretical approaches to\nunderstanding evolutionary branching under these conditions have been\npreviously developed within the (deterministic) framework of "adaptive\ndynamics." Here, we advance the understanding of the stochastic nature of\nevolutionary outcomes by introducing the concept of "stochastic trade-offs" as\nopposed to "hard" ones. The key idea is that the stochasticity of mutations\noccurs in a high-dimensional trait space and this translates into variability\nthat is constrained to a flexible tradeoff curve. By incorporating this\nadditional source of stochasticity, we are able to account for the observed\nempirical variability and make predictions regarding the likelihood of\nevolutionary branching under different conditions. This approach effectively\nbridges the gap between theoretical predictions and experimental observations,\nproviding insights into when and how evolutionary branching is more likely to\noccur in laboratory experiments.'</li><li>"Integrating experimental feedback improves generative models for\n biological sequences\nGenerative probabilistic models have shown promise in designing artificial\nRNA and protein sequences but often suffer from high rates of false positives,\nwhere sequences predicted as functional fail experimental validation. To\naddress this critical limitation, we explore the impact of reintegrating\nexperimental feedback into the model design process. We propose a\nlikelihood-based reintegration scheme, which we test through extensive\ncomputational experiments on both RNA and protein datasets, as well as through\nwet-lab experiments on the self-splicing ribozyme from the group I intron RNA\nfamily where our approach demonstrates particular efficacy. We show that\nintegrating recent experimental data enhances the model's capacity of\ngenerating functional sequences (e.g. from 6.7\\% to 63.7\\% of active designs at\n45 mutations). This feedback-driven approach thus provides a significant\nimprovement in the design of biomolecular sequences by directly tackling the\nfalse-positive challenge."</li></ul> | | 3 | <ul><li>'Endowments, patience types, and uniqueness in two-good HARA utility\n economies\nThis paper establishes a link between endowments, patience types, and the\nparameters of the HARA Bernoulli utility function that ensure equilibrium\nuniqueness in an economy with two goods and two impatience types with additive\nseparable preferences. We provide sufficient conditions that guarantee\nuniqueness of equilibrium for any possible value of $\\gamma$ in the HARA\nutility function\n$\\frac{\\gamma}{1-\\gamma}\\left(b+\\frac{a}{\\gamma}x\\right)^{1-\\gamma}$. 
The\nanalysis contributes to the literature on uniqueness in pure exchange economies\nwith two-goods and two agent types and extends the result in [4].'</li><li>'A Deep Learning Analysis of Climate Change, Innovation, and Uncertainty\nWe study the implications of model uncertainty in a climate-economics\nframework with three types of capital: "dirty" capital that produces carbon\nemissions when used for production, "clean" capital that generates no emissions\nbut is initially less productive than dirty capital, and knowledge capital that\nincreases with R\\&D investment and leads to technological innovation in green\nsector productivity. To solve our high-dimensional, non-linear model framework\nwe implement a neural-network-based global solution method. We show there are\nfirst-order impacts of model uncertainty on optimal decisions and social\nvaluations in our integrated climate-economic-innovation framework. Accounting\nfor interconnected uncertainty over climate dynamics, economic damages from\nclimate change, and the arrival of a green technological change leads to\nsubstantial adjustments to investment in the different capital types in\nanticipation of technological change and the revelation of climate damage\nseverity.'</li><li>'Exploration of legal implications of air and space travel for\n international and domestic travel and the Environment\nThe rapid growth of air and space travel in recent years has resulted in an\nincreased demand for legal regulation in the aviation and aerospace fields.\nThis paper provides an overview of air and space law, including the topics of\naircraft accident investigations, air traffic control, international borders\nand law, and the regulation of space activities. With the increasing complexity\nof air and space travel, it is important to understand the legal implications\nof these activities. This paper examines the various legal aspects of air and\nspace law, including the roles of national governments, international\norganizations, and private entities. It also provides an overview of the legal\nframeworks that govern these activities and the implications of international\nlaw. Finally, it considers the potential for future developments in the field\nof air and space law. This paper provides a comprehensive overview of the legal\naspects of air and space travel and their implications for international and\ndomestic travel, as well as for international business and other activities in\nthe air and space domains.'</li></ul> | | 5 | <ul><li>'Observational properties of regular black holes in Asymptotic Safety\nWe consider the observational properties of a spherically symmetric, static\nregular black hole within the framework of asymptotic safety (AS) as proposed\nby Bonanno et al. The metric resembles the Schwarzschild solution in the\nclassical limit. The departure from Schwarzschild at small scales is controlled\nby a single free parameter related to the ultraviolet (UV) cutoff of the\ntheory. We investigated null and time-like geodesics around the AS metric,\nincluding circular orbits, photon rings and lensing effects. In particular we\nfocused on the optical properties of thin accretion disks in the equatorial\nplane of the object and compared them with those of accretion disks in the\nSchwarzschild metric. 
We found that the radiation flux, luminosity, and\nefficiency of the accretion disk increase with the value of the free parameter.\nUsing a spacetime generic open-source relativistic ray-tracing code, we\nsimulate the K$\\alpha$ iron line profiles emitted by the disk and analyze their\ndeviation from that of the Schwarzschild geometry.'</li><li>"Backreaction in $f(R,G)$ Gravitational Waves\nWe present a comprehensive analysis of gravitational wave dynamics in\n$f(R,G)$ modified gravity, where $R$ is the Ricci scalar and $G$ the\nGauss-Bonnet invariant. By developing a scalar-tensor formulation with two\nauxiliary fields, we systematically investigate both the propagation and\nbackreaction of high-frequency gravitational waves in cosmological backgrounds.\nThe linearized field equations reveal how the Gauss-Bonnet term introduces new\ncurvature-dependent couplings between tensor and scalar degrees of freedom,\nleading to modified dispersion relations and distinctive wave propagation\neffects. On de Sitter backgrounds, we obtain exact decoupled equations for the\ntensor and scalar modes, demonstrating how the additional $G$-dependence alters\nboth the effective masses and energy transport mechanisms compared to pure\n$f(R)$ theories.\n Our derivation of the effective energy-momentum tensor extends Isaacson's\napproach to incorporate the novel scalar field contributions, revealing a\ncomplex hierarchy of characteristic length scales ($\\lambda$, $\\ell$, and\n$\\mathcal{L}$) that govern the backreaction dynamics. The resulting formalism\nsuggests potentially observable signatures in both the propagation (phase\nshifts, amplitude modulation) and stochastic background of gravitational waves.\nThese effects could be probed by next-generation detectors, offering new\nconstraints on the $f(R,G)$ coupling parameters. The theoretical framework\ndeveloped here provides a foundation for future studies of gravitational wave\ngeneration in modified gravity scenarios and their role in cosmological\nstructure formation."</li><li>'Stellar isotropic model in the symmetric teleparallel equivalent of\n general relativity theory\nRecently, the theory of symmetric teleparallel equivalent of general\nrelativity (STEGR) has gained much interest in the cosmology and astrophysics\ncommunity. Within this theory, we discuss the method of deriving a stellar\nisotropic model. In this respect, we implement the equations of motion of STEGR\ntheory to a spacetime that is symmetric in a spherical manner, resulting in a\nset of nonlinear differential equations with more unknowns than equations. To\nsolve this issue, we assume a special form of $g_{tt}$, and suppose a null\nvalue of the anisotropy to obtain the form of $g_{rr}$. We then investigate the\npossibility of obtaining an isotropic stellar model consistent with\nobservational data. To test the stability of our model, we apply the adiabatic\nindex and the Tolman-Oppenheimer-Volkoff equation. Furthermore, we examine our\nmodel using different observed values of radii and masses of pulsars, showing\nthat all of them fit in a consistent way.'</li></ul> | | 2 | <ul><li>"LLM-based Interactive Imitation Learning for Robotic Manipulation\nRecent advancements in machine learning provide methods to train autonomous\nagents capable of handling the increasing complexity of sequential\ndecision-making in robotics. Imitation Learning (IL) is a prominent approach,\nwhere agents learn to control robots based on human demonstrations. 
However, IL\ncommonly suffers from violating the independent and identically distributed\n(i.i.d) assumption in robotic tasks. Interactive Imitation Learning (IIL)\nachieves improved performance by allowing agents to learn from interactive\nfeedback from human teachers. Despite these improvements, both approaches come\nwith significant costs due to the necessity of human involvement. Leveraging\nthe emergent capabilities of Large Language Models (LLMs) in reasoning and\ngenerating human-like responses, we introduce LLM-iTeach -- a novel IIL\nframework that utilizes an LLM as an interactive teacher to enhance agent\nperformance while alleviating the dependence on human resources. Firstly,\nLLM-iTeach uses a hierarchical prompting strategy that guides the LLM in\ngenerating a policy in Python code. Then, with a designed similarity-based\nfeedback mechanism, LLM-iTeach provides corrective and evaluative feedback\ninteractively during the agent's training. We evaluate LLM-iTeach against\nbaseline methods such as Behavior Cloning (BC), an IL method, and CEILing, a\nstate-of-the-art IIL method using a human teacher, on various robotic\nmanipulation tasks. Our results demonstrate that LLM-iTeach surpasses BC in the\nsuccess rate and achieves or even outscores that of CEILing, highlighting the\npotential of LLMs as cost-effective, human-like teachers in interactive\nlearning environments. We further demonstrate the method's potential for\ngeneralization by evaluating it on additional tasks. The code and prompts are\nprovided at: https://github.com/Tubicor/LLM-iTeach."</li><li>"Lifecycle Management of Trustworthy AI Models in 6G Networks: The REASON\n Approach\nArtificial Intelligence (AI) is expected to play a key role in 6G networks\nincluding optimising system management, operation, and evolution. This requires\nsystematic lifecycle management of AI models, ensuring their impact on services\nand stakeholders is continuously monitored. While current 6G initiatives\nintroduce AI, they often fall short in addressing end-to-end intelligence and\ncrucial aspects like trust, transparency, privacy, and verifiability.\nTrustworthy AI is vital, especially for critical infrastructures like 6G. This\npaper introduces the REASON approach for holistically addressing AI's native\nintegration and trustworthiness in future 6G networks. The approach comprises\nAI Orchestration (AIO) for model lifecycle management, Cognition (COG) for\nperformance evaluation and explanation, and AI Monitoring (AIM) for tracking\nand feedback. Digital Twin (DT) technology is leveraged to facilitate real-time\nmonitoring and scenario testing, which are essential for AIO, COG, and AIM. We\ndemonstrate this approach through an AI-enabled xAPP use case, leveraging a DT\nplatform to validate, explain, and deploy trustworthy AI models."</li><li>"AdaptoVision: A Multi-Resolution Image Recognition Model for Robust and\n Scalable Classification\nThis paper introduces AdaptoVision, a novel convolutional neural network\n(CNN) architecture designed to efficiently balance computational complexity and\nclassification accuracy. By leveraging enhanced residual units, depth-wise\nseparable convolutions, and hierarchical skip connections, AdaptoVision\nsignificantly reduces parameter count and computational requirements while\npreserving competitive performance across various benchmark and medical image\ndatasets. 
Extensive experimentation demonstrates that AdaptoVision achieves\nstate-of-the-art on BreakHis dataset and comparable accuracy levels, notably\n95.3\\% on CIFAR-10 and 85.77\\% on CIFAR-100, without relying on any pretrained\nweights. The model's streamlined architecture and strategic simplifications\npromote effective feature extraction and robust generalization, making it\nparticularly suitable for deployment in real-time and resource-constrained\nenvironments."</li></ul> | | 0 | <ul><li>'Modified gravity realizations of quintom dark energy after DESI DR2\nWe investigate the realization of quintom scenario for dynamical dark energy\nwithin modified gravity theories that can efficiently fit the recent\nobservational datasets. Starting from a general effective field theory\nformulation of dark energy in metric-affine geometry, we derive the background\naction in unitary gauge and we demonstrate how both $f(T)$ and $f(Q)$ gravity\ncan naturally realize quintom behavior through appropriate forms and parameter\nchoices. Additionally, using the Gaussian process reconstruction of the latest\nDESI DR2 BAO data combined with SNe and CMB observations, we extract the\nreconstructed dark-energy equation-of-state parameter, showing that it exhibits\nquintom-type evolution, crossing the phantom divide from below. Moreover,\nthrough detailed parameter estimations and application of information criteria,\nwe compare the model with the quadratic one. Our results show that, due to its\nrich structure, modified gravity stands as one of the main candidates for the\nrealization of the data-favoured dynamical dark energy.'</li><li>'Detection of wave activity within a realistic 3D MHD quiet sun\n simulation\nContext. Tracing wave activity from the photosphere to the corona has\nimportant implications for coronal heating and prediction of the solar wind.\nDespite extensive theory and simulations, the detection of waves in realistic\nMHD simulations still presents a large challenge due to wave interaction, mode\nconversion, and damping mechanisms. Aims. We conducted this study to detect\nlocalised wave activity within a realistic MHD simulation of the solar\natmosphere by the Bifrost code. Methods. We present a new method of detecting\nthe most significant contributions of wave activity within localised areas of\nthe domain, aided by Discrete Fourier Transforms and frequency filtering. We\ncorrelate oscillations in the vertical & horizontal magnetic field, velocities\nparallel & perpendicular to the magnetic field, and pressure to infer the\nnature of the dominant wave modes. Results. Our method captures the most\npowerful frequencies and wavenumbers, as well as providing a new diagnostic for\ndamping processes. We infer the presence of magnetoacoustic waves in the\nboundaries of prominent chromospheric/coronal swirling features. We find these\nwaves are likely damped by viscous heating in the swirl boundaries,\ncontributing to heating in the upper atmosphere. Conclusions. Using the most\nsignificant frequencies decomposition, we highlight that energy can be\ntransported from the lower atmosphere to the upper atmosphere through waves and\nfluctuations along the swirl boundaries. Although further analysis is needed to\nconfirm these findings, our new method provides a path forward to investigate\nwave activity in the solar atmosphere'</li><li>'Is Lorentz invariance violation found?\nLorentz invariance violation (LIV) has long been recognized as an observable\nlow-energy signature of quantum gravity. 
In spite of a great effort to detect\nLIV effects, so far only lower bounds have been derived. The high energy\nphotons from the gamma ray burst GRB 221009A have been detected by the LHAASO\ncollaboration and one at ${\\cal E} \\simeq 251 \\, \\rm TeV$ by the Carpet\ncollaboration using a partial data set. Very recently, the Carpet collaboration\nhas completed the full data analysis, reporting further support for their\npreviously detected photon now at ${\\cal E} = 300^{+ 43}_{- 38} \\, {\\rm TeV}$,\nwhich manifestly clashes with conventional physics. Taking this result at face\nvalue, we derive the first evidence for LIV and we show that such a detection\ncannot be explained by axion-like particles (ALPs), which allow for the\nobservation of the highest energy photons detected by LHAASO. We also outline a\nscenario in which ALPs and LIV naturally coexist. If confirmed by future\nobservations our finding would represent the first positive result in quantum\ngravity phenomenology.'</li></ul> | | 9 | <ul><li>'Note on $q=2$ paraparticle SYK model\nWe investigate the $q=2$ SYK model with paraparticles (PSYK$_2$), analyzing\nits thermodynamics and spectral form factor (SFF) using random matrix theory.\nThe Hamiltonian is quadratic, with coupling coefficients randomly drawn from\nthe Gaussian Unitary Ensemble (GUE). The model exhibits self-averaging behavior\nand shows a striking transition in SFF dynamics: while the fermionic SYK$_2$\ndisplays a ramp behavior $\\mathcal{K}(t) \\sim e^{C_0 t}$ with $C_0 \\sim \\ln N$,\nthe paraparticle cases exhibit $C_0 \\sim \\mathcal{O}(1)$. These findings offer\nnew insights into quantum systems with exotic statistics.'</li><li>'Free field realization of the quantum toroidal algebra of\n $\\mathfrak{gl}_1$ with general levels\nWe present a unified free field realization of representations for the\nquantum toroidal algebra of $\\mathfrak{gl}_1$ with arbitrary levels,\nconstructed using six free boson fields. This realization arises from a\nspecialized factorization of the structure function within the defining\nrelations of the quantum toroidal algebra of $\\mathfrak{gl}_1$. Utilizing this\nfree field realization, we further develop intertwining operators for the\nalgebra of $\\mathfrak{gl}_1$.'</li><li>'AdS3 axion wormholes as stable contributions to the Euclidean\n gravitational path integral\nRecent work has demonstrated that Euclidean Giddings-Strominger axion\nwormholes are stable in asymptotically flat 4D Minkowski spacetime, suggesting\nthat they should, at least naively, be included as contributions in the quantum\ngravitational path integral. Such inclusion appears to lead to known wormhole\nparadoxes, such as the factorization problem. In this paper, we generalize\nthese results to AdS3 spacetime, where the axion is equivalent to a U(1) gauge\nfield. We explicitly construct the classical wormhole solutions, show their\nregularity and stability, and compute their actions for arbitrary ratios of the\nwormhole mouth radius to the AdS radius and across various topologies. Finally,\nWe discuss potential implications of these findings for the 3D gravitational\npath integral.'</li></ul> | | 13 | <ul><li>"Proton Charge Radius from Lepton Scattering\nProtons are bound states of the strong interaction governed by Quantum\nChromodynamics (QCD). Its charge radius ($r_{E}^{p}$) is an important quantity\nas it characterizes the spatial distribution of the proton's charge, which is\ncarried by the quarks. 
On the other hand, the proton charge radius is an\nessential physical input for the bound-state Quantum Electrodynamic (QED)\ncalculations for the hydrogen atomic energy levels. Nevertheless, the large\ndiscrepancy between $r_{E}^{p}$ measurements from muonic hydrogen spectroscopy,\nand those from $ep$ elastic scattering and ordinary hydrogen spectroscopy, have\nbeen puzzling physicists for over a decade. Tremendous efforts, in both\ntheoretical and experimental sides, have been dedicated to providing various\ninsights into this puzzle, yet certain issues still remain unresolved,\nparticularly in the field of lepton scatterings. This review will focus on\n$r_{E}^{p}$ measurements using lepton scatterings, the recent theoretical and\nexperimental developments in this field, as well as future experiments using\nthis technique."</li><li>'First observation of the $β$3$α$p decay of $^{13}\\mathrm{O}$\n via $β$-delayed charged-particle spectroscopy\nBackground: The $\\beta$-delayed proton-decay of $^{13}\\mathrm{O}$ has\npreviously been studied, but the direct observation of $\\beta$-delayed\n$\\alpha$+$\\alpha$+$\\alpha$+p decay has not been reported. Purpose: Observing\nrare 3$\\alpha$+p events from the decay of excited states in\n$^{13}\\mathrm{N}^{\\star}$ allows for a sensitive probe of exotic\nhighly-clustered configurations in $^{13}$N. Method: To measure the low-energy\nproducts following $\\beta$-delayed 3$\\alpha$p-decay, the TexAT Time Projection\nChamber was employed using the one-at-a-time $\\beta$-delayed charged-particle\nspectroscopy technique at the Cyclotron Institute, Texas A&M University.\nResults: A total of $1.9 \\times 10^{5}$ $^{13}\\mathrm{O}$ implantations were\nmade inside the TexAT Time Projection Chamber. 149 3$\\alpha$+p events were\nobserved yielding a $\\beta$-delayed 3$\\alpha+p$ branching ratio of 0.078(6)%.\nConclusion: Four previously unknown $\\alpha$-decaying states were observed, one\nwith a strong $^{9}\\mathrm{B(g.s)}+\\alpha$ characteristic at 11.3 MeV, one with\na $^{9}\\mathrm{B}(\\frac{1}{2}^{+})+\\alpha$ nature at 12.4 MeV, and another two\nthat are dominated by $^{9}\\mathrm{B}({\\frac{5}{2}}^{+})+\\alpha$ at 13.1 and\n13.7 MeV. Population of the $\\frac{1}{2}^{+}$ state in $^{9}\\mathrm{B}$ has\nbeen unambiguously seen, cementing the predicted existence of the mirror-state\nbased on the states observed in $^{9}\\mathrm{Be}$.'</li><li>"Measuring short-range correlations and quasi-elastic cross sections in\n A(e,e') at x>1 and modest Q$^2$\nWe present results from the Jefferson Lab E08-014 experiment, investigating\nshort-range correlations (SRC) through measurements of absolute inclusive\nquasi-elastic cross sections and their ratios. This study utilized 3.356 GeV\nelectrons scattered off targets including $^2$H, $^3$He, $^4$He, $^{12}$C,\n$^{40}$Ca, and $^{48}$Ca, at modest momentum transfers ($1.3 < Q^2 \\leq 2$\nGeV$^2$). Kinematics were selected to enhance the cross-section contribution\nfrom high-momentum nucleons originating from the strongly interacting,\nshort-distance components of two-nucleon SRCs (2N-SRCs), known to exhibit a\nuniversal structure across both light and heavy nuclei.We analyzed the A/$^2$H\nratio within the region dominated by 2N-SRCs to characterize the nuclear\ndependence of SRC contributions across various nuclei. Additionally, the\nA/$^3$He ratio was examined at kinematics sensitive to nucleons with even\nhigher momentum, aiming to identify signals indicative of three-nucleon SRCs\n(3N-SRCs). 
The traditional analysis method in the expected 3N-SRC region ($x >\n2$) did not yield a clear plateau; instead, the data diverged from the\npredicted 3N-SRC behavior as momentum transfer increased. However, when\nanalyzed in terms of the struck nucleon's light-cone momentum, the data\nexhibited the opposite trend, progressively approaching the predicted 3N-SRC\nplateau. These observations suggest that future measurements at higher energies\nmay facilitate a definitive isolation and identification of 3N-SRCs."</li></ul> | | 1 | <ul><li>'Effect of pressure on the transport properties and thermoelectric\n performance of Dirac semimetal ZrTe5\nIn this study, we have investigated and compared the effect of hydrostatic\npressure up to ~20 kbar on the transport properties of ZrTe5 single crystals\ngrown by chemical vapor transport (CVT) and flux methods. With the application\nof pressure, the electrical resistivity Rho(T) and thermopower S(T) of both\ncrystals were found to increase in the whole temperature range unlike the other\nknown thermoelectric materials, such as Bi2Te3, SnSe etc. This observation is\nsupported by the complementary first-principles band structure calculation as\nthe application of pressure widens the direct bandgap at {\\Gamma} point.\nMoreover, the analysis of the pressure dependent magneto-transport and\nShubnikov de-Hass oscillation results revealed an increase in carrier\nconcentration and effective mass along with the reduction of mobility as\npressure rises. Furthermore, with the application of pressure, the flux-grown\nZrTe5 crystals display a transition from unipolar to bipolar charge transport\nas evidenced by the emergence of resistivity peak at T* under high pressure,\nunlike the CVT-grown ZrTe5 crystals where the bipolar charge transport near its\ncharacteristic resistivity peak (Tp) remains unaffected.'</li><li>'Signatures of Candidate States of $ν=12/5$ in Shot Noise\nFractional quantum Hall (FQH) states are highly sought after because of their\nability to host non-abelian anyons, whose braiding statistics make them\nexcellent candidates for qubits in topological quantum computing. Multiple\ntheoretical studies on the $\\nu=\\frac{12}{5}$ FQH state predict various\nquasi-particle states hosted by the $\\frac{12}{5}$ plateau, which include\n$\\mathbb Z_3$ parafermions and Majorana modes. In this work, we provide a\nsystematic protocol to distinguish among four possible candidate wavefunctions\nof the $\\frac{12}{5}$ plateau using zero-frequency short noise experiments on a\nfilter-geometry. Qualitative comparisons of Fano-Factors provide a robust way\nto predict the candidate state across both the full and partial thermal\nequilibration regimes without prior knowledge of the experimental information,\nlike thermal equilibration length, to allow for more realistic experiments.'</li><li>'Performances in solving the Bethe-Salpeter equation with the Yambo code\nIn this work, we analyze the performances of two different strategies in\nsolving the structured eigenvalue problem deriving from the Bethe-Salpeter\nequation (BSE) in condensed matter physics. The first strategy employs direct\ndiagonalization, while the second is based on an iterative solver. 
The BSE\nmatrix is constructed with the Yambo code, and the two strategies are\nimplemented by interfacing Yambo with the ScaLAPACK and ELPA libraries for\ndirect diagonalization, and with the SLEPc library for the iterative approach.\nWe consider both the hermitian (Tamm-Dancoff approximation) and\npseudo-hermitian forms, addressing dense matrices of three different sizes. A\ndescription of the implementation is also provided, with details for the\npseudo-hermitian case. Timing and memory utilization are analyzed on both CPU\nand GPU clusters. The CPU simulations are performed on a local cluster in Rome,\nwhile the GPU simulations are performed on the Leonardo HPC cluster of CINECA.\nOur results demonstrate that it is now feasible to handle dense BSE matrices of\nthe order 10$^5$.'</li></ul> | | 4 | <ul><li>'Translation of Fetal Brain Ultrasound Images into Pseudo-MRI Images\n using Artificial Intelligence\nUltrasound is a widely accessible and cost-effective medical imaging tool\ncommonly used for prenatal evaluation of the fetal brain. However, it has\nlimitations, particularly in the third trimester, where the complexity of the\nfetal brain requires high image quality for extracting quantitative data. In\ncontrast, magnetic resonance imaging (MRI) offers superior image quality and\ntissue differentiation but is less available, expensive, and requires\ntime-consuming acquisition. Thus, transforming ultrasonic images into an\nMRI-mimicking display may be advantageous and allow better tissue anatomy\npresentation. To address this goal, we have examined the use of artificial\nintelligence, implementing a diffusion model renowned for generating\nhigh-quality images. The proposed method, termed "Dual Diffusion Imposed\nCorrelation" (DDIC), leverages a diffusion-based translation methodology,\nassuming a shared latent space between ultrasound and MRI domains. Model\ntraining was obtained utilizing the "HC18" dataset for ultrasound and the "CRL\nfetal brain atlas" along with the "FeTA " datasets for MRI. The generated\npseudo-MRI images provide notable improvements in visual discrimination of\nbrain tissue, especially in the lateral ventricles and the Sylvian fissure,\ncharacterized by enhanced contrast clarity. Improvement was demonstrated in\nMutual information, Peak signal-to-noise ratio, Fr\\\'echet Inception Distance,\nand Contrast-to-noise ratio. Findings from these evaluations indicate\nstatistically significant superior performance of the DDIC compared to other\ntranslation methodologies. In addition, a Medical Opinion Test was obtained\nfrom 5 gynecologists. The results demonstrated display improvement in 81% of\nthe tested images. In conclusion, the presented pseudo-MRI images hold the\npotential for streamlining diagnosis and enhancing clinical outcomes through\nimproved representation.'</li><li>'On Geometric Shaping for 400 Gbps IM-DD Links with Laser Intensity Noise\nWe propose geometric shaping for IM-DD links dominated by relative intensity\nnoise (RIN). For 400 Gbps links, our geometrically-shaped constellations result\nin error probability improvements that relaxes the RIN laser design by 3 dB.'</li><li>'System Level Synthesis for Affine Control Policies: Model Based and\n Data-Driven Settings\nThere is an increasing need for effective control of systems with complex\ndynamics, particularly through data-driven approaches. 
System Level Synthesis\n(SLS) has emerged as a powerful framework that facilitates the control of\nlarge-scale systems while accounting for model uncertainties. SLS approaches\nare currently limited to linear systems and time-varying linear control\npolicies, thus limiting the class of achievable control strategies. We\nintroduce a novel closed-loop parameterization for time-varying affine control\npolicies, extending the SLS framework to a broader class of systems and\npolicies. We show that the closed-loop behavior under affine policies can be\nequivalently characterized using past system trajectories, enabling a fully\ndata-driven formulation. This parameterization seamlessly integrates affine\npolicies into optimal control problems, allowing for a closed-loop formulation\nof general Model Predictive Control (MPC) problems. To the best of our\nknowledge, this is the first work to extend SLS to affine policies in both\nmodel-based and data-driven settings, enabling an equivalent formulation of MPC\nproblems using closed-loop maps. We validate our approach through numerical\nexperiments, demonstrating that our model-based and data-driven affine SLS\nformulations achieve performance on par with traditional model-based MPC.'</li></ul> | | 6 | <ul><li>'Jet energy calibration with deep learning as a Kubeflow pipeline\nPrecise measurements of the energy of jets emerging from particle collisions\nat the LHC are essential for a vast majority of physics searches at the CMS\nexperiment. In this study, we leverage well-established deep learning models\nfor point clouds and CMS open data to improve the energy calibration of\nparticle jets. To enable production-ready machine learning based jet energy\ncalibration an end-to-end pipeline is built on the Kubeflow cloud platform. The\npipeline allowed us to scale up our hyperparameter tuning experiments on cloud\nresources, and serve optimal models as REST endpoints. We present the results\nof the parameter tuning process and analyze the performance of the served\nmodels in terms of inference time and overhead, providing insights for future\nwork in this direction. The study also demonstrates improvements in both flavor\ndependence and resolution of the energy response when compared to the standard\njet energy corrections baseline.'</li><li>"Comparing and improving hybrid deep learning algorithms for identifying\n and locating primary vertices\nUsing deep neural networks to identify and locate proton-proton collision\npoints, or primary vertices, in LHCb has been studied for several years.\nPreliminary results demonstrated the ability for a hybrid deep learning\nalgorithm to achieve similar or better physics performances compared to\nstandard heuristic approaches. The previously studied architectures relied\ndirectly on hand-calculated Kernel Density Estimators (KDEs) as input features.\nCalculating these KDEs was slow, making use of the DNN inference engines in the\nexperiment's real-time analysis (trigger) system problematic. Here we present\nrecent results from a high-performance hybrid deep learning algorithm that uses\ntrack parameters as input features rather than KDEs, opening the path to\ndeployment in the real-time trigger system."</li><li>'The ECFA Roadmap Process for Particle Identification and Photon Detector\n R&D\nThe Detector R&D Roadmap for European Particle Physics was published in\nFebruary 2022. 
The outcome of the Roadmap process relating to particle\nidentification and photon detectors is summarised.'</li></ul> | ## Evaluation ### Metrics | Label | F1 | |:--------|:-------| | **all** | 0.5294 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("gpham/all-mpnet-base-v2-setfit-arxiv") # Run inference preds = model("Revisiting the physical properties of (LaS)1+d(NbS2) misfit-layered compounds Electrical transport in polycrystalline and single-crystalline (LaS)1+d(NbS2) misfit-layered compounds was measured. Polycrystalline samples were synthesized using S raw materials of different purities (2N or 6N), and single-crystalline samples were grown using two types of transport agents (2NH4Cl+PbCl2 or NH4Cl) via the chemical vapor transport method. The temperature dependence on resistivity dropped at 1.3-2.0 K for some of the samples, which might be affected by the unknown impurity. (LaS)1+d(NbS2) misfit-layered compounds for the main phase of those obtained samples exhibited no superconductivity above 0.2 K by the resistivity measurement.") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:-------|:----| | Word count | 32 | 146.75 | 284 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 8 | | 1 | 8 | | 2 | 8 | | 3 | 8 | | 4 | 8 | | 5 | 8 | | 6 | 8 | | 7 | 8 | | 8 | 8 | | 9 | 8 | | 10 | 8 | | 11 | 8 | | 12 | 8 | | 13 | 8 | | 14 | 8 | | 15 | 8 | | 16 | 8 | | 17 | 8 | | 18 | 8 | | 19 | 8 | ### Training Hyperparameters - batch_size: (16, 16) - num_epochs: (1, 1) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 20 - body_learning_rate: (2e-05, 2e-05) - head_learning_rate: 2e-05 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - l2_weight: 0.01 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0025 | 1 | 0.1259 | - | | 0.125 | 50 | 0.077 | - | | 0.25 | 100 | 0.0514 | - | | 0.375 | 150 | 0.0361 | - | | 0.5 | 200 | 0.0264 | - | | 0.625 | 250 | 0.0226 | - | | 0.75 | 300 | 0.0196 | - | | 0.875 | 350 | 0.0139 | - | | 1.0 | 400 | 0.0138 | - | | 0.05 | 1 | 0.0111 | - | | 0.125 | 50 | 0.0114 | - | | 0.25 | 100 | 0.0069 | - | | 0.375 | 150 | 0.0069 | - | | 0.5 | 200 | 0.0052 | - | | 0.625 | 250 | 0.0029 | - | | 0.75 | 300 | 0.0026 | - | | 0.875 | 350 | 0.0013 | - | | 1.0 | 400 | 0.0013 | - | ### Framework Versions - Python: 3.11.12 - SetFit: 1.1.2 - Sentence Transformers: 4.1.0 - Transformers: 4.48.3 - PyTorch: 2.7.0+cu126 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citation ### BibTeX ```bibtex 
@article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
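As a supplement to the hyperparameters listed above, here is a minimal, hypothetical sketch of how a model like this is trained with the SetFit `Trainer` API. The dataset contents and label ids are placeholders, and the base checkpoint is inferred from the model id; only the hyperparameter values are taken from this card.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Base sentence-transformer body (assumption, inferred from the model id).
model = SetFitModel.from_pretrained("sentence-transformers/all-mpnet-base-v2")

# Placeholder few-shot data; the actual training set used 8 abstracts per label.
train_dataset = Dataset.from_dict({
    "text": ["abstract about dark energy ...", "abstract about option pricing ..."],
    "label": [0, 3],
})

# Values copied from the "Training Hyperparameters" section above.
args = TrainingArguments(
    batch_size=16,
    num_epochs=1,
    num_iterations=20,
    body_learning_rate=2e-5,
    head_learning_rate=2e-5,
    l2_weight=0.01,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
model.save_pretrained("all-mpnet-base-v2-setfit-arxiv")
```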
sdfsdsssFBoss/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-swift_jumping_cheetah
sdfsdsssFBoss
2025-05-04T07:12:28Z
2
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am swift jumping cheetah", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-23T07:17:05Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-swift_jumping_cheetah tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am swift jumping cheetah - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-swift_jumping_cheetah This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="sdfsdsssFBoss/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-swift_jumping_cheetah", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
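For readers unfamiliar with GRPO, the following is a minimal, hypothetical training sketch using TRL's `GRPOTrainer`. The reward function and prompt dataset are placeholders; the actual RL-swarm reward setup is not described in this card.

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Toy reward: prefer completions close to 200 characters (placeholder only).
def reward_len(completions, **kwargs):
    return [-abs(200 - len(completion)) for completion in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder prompt dataset

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="Qwen2.5-0.5B-GRPO"),
    train_dataset=dataset,
)
trainer.train()
```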
Selma01/Wilkinson
Selma01
2025-05-04T07:10:01Z
0
0
null
[ "license:artistic-2.0", "region:us" ]
null
2025-05-04T07:10:01Z
--- license: artistic-2.0 ---
DevQuasar/kyutai.helium-1-preview-2b-GGUF
DevQuasar
2025-05-04T07:09:00Z
0
0
null
[ "text-generation", "base_model:kyutai/helium-1-preview-2b", "base_model:finetune:kyutai/helium-1-preview-2b", "region:us" ]
text-generation
2025-05-04T07:08:31Z
--- base_model: - kyutai/helium-1-preview-2b pipeline_tag: text-generation --- [<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com) Quantized version of: [kyutai/helium-1-preview-2b](https://huggingface.co/kyutai/helium-1-preview-2b) 'Make knowledge free for everyone' <p align="center"> Made with <br> <a href="https://www.civo.com/" target="_blank"> <img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/> </a> </p> <a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
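The card does not include a usage example; a minimal sketch for running one of the quantized files with llama.cpp might look like the following. The filename is a placeholder for whichever quant you download from this repo.

```bash
# Short completion with llama.cpp's CLI (assumes llama.cpp is built or installed).
llama-cli -m helium-1-preview-2b.Q4_K_M.gguf -p "Once upon a time" -n 128
```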
ysn-rfd/gemma3_fibonacci
ysn-rfd
2025-05-04T07:03:44Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma3", "trl", "en", "base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-04T07:03:30Z
--- base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** ysn-rfd - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
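A minimal, hypothetical inference sketch with Unsloth's loader is shown below. The prompt is a placeholder, and applying the model's chat template (omitted here for brevity) would normally give better results.

```python
from unsloth import FastLanguageModel

# Assumption: FastLanguageModel can load this Gemma 3 finetune; Unsloth also
# ships other loaders for multimodal Gemma 3 checkpoints.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ysn-rfd/gemma3_fibonacci",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable fast inference mode

inputs = tokenizer(
    "Write a function that returns the n-th Fibonacci number.",
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```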
mradermacher/Protobase-SCE1-LLaMa-70B-i1-GGUF
mradermacher
2025-05-04T07:03:15Z
19
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:TareksGraveyard/Protobase-SCE1-LLaMa-70B", "base_model:quantized:TareksGraveyard/Protobase-SCE1-LLaMa-70B", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-05-03T16:36:48Z
--- base_model: TareksGraveyard/Protobase-SCE1-LLaMa-70B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/TareksGraveyard/Protobase-SCE1-LLaMa-70B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-i1-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-i1-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-i1-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | | | [GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-i1-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | | | [GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-i1-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | | | [GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-i1-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | | | [GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-i1-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-i1-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-i1-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-i1-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | | | [GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-i1-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-i1-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-i1-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | | | [GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-i1-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-i1-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better | | 
[GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-i1-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | | | [GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-i1-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-i1-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-i1-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-i1-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | | | [GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-i1-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | | | [GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-i1-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | | | [PART 1](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-i1-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-i1-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
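Since the Q6_K quant above ships in two parts, here is a minimal sketch of the concatenation step that the linked READMEs describe, on Linux/macOS:

```bash
# Join the two downloaded parts into a single GGUF file, then remove the parts.
cat Protobase-SCE1-LLaMa-70B.i1-Q6_K.gguf.part1of2 \
    Protobase-SCE1-LLaMa-70B.i1-Q6_K.gguf.part2of2 \
    > Protobase-SCE1-LLaMa-70B.i1-Q6_K.gguf
rm Protobase-SCE1-LLaMa-70B.i1-Q6_K.gguf.part*
```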
davidheineman/colbert-acl
davidheineman
2025-05-04T07:01:04Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-14T14:51:12Z
--- license: apache-2.0 --- This is a dataset of 100K+ ML & NLP abstracts with a pre-built index using [colbert-ir/colbertv2.0](https://huggingface.co/colbert-ir/colbertv2.0). A deployed version of this index is at [github.com/davidheineman/acl-search](https://github.com/davidheineman/acl-search).
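A hypothetical sketch of querying the pre-built index with the [ColBERT](https://github.com/stanford-futuredata/ColBERT) library follows; the local index path and the query are placeholders, and this repo's exact directory layout may differ.

```python
from colbert import Searcher
from colbert.infra import Run, RunConfig

# Assumes the prebuilt index files from this repo were downloaded to ./index.
with Run().context(RunConfig(nranks=1)):
    searcher = Searcher(index="./index")
    passage_ids, ranks, scores = searcher.search("few-shot text classification", k=5)
    for pid, rank, score in zip(passage_ids, ranks, scores):
        print(rank, round(score, 2), pid)  # ids map back to the indexed abstracts
```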
mradermacher/Unnamed-Exp-QWQ-32b-v0.3.5-i1-GGUF
mradermacher
2025-05-04T07:00:11Z
19
0
transformers
[ "transformers", "gguf", "en", "base_model:TheSkullery/Unnamed-Exp-QWQ-32b-v0.3.5", "base_model:quantized:TheSkullery/Unnamed-Exp-QWQ-32b-v0.3.5", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-05-04T02:42:06Z
--- base_model: TheSkullery/Unnamed-Exp-QWQ-32b-v0.3.5 language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/TheSkullery/Unnamed-Exp-QWQ-32b-v0.3.5 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Unnamed-Exp-QWQ-32b-v0.3.5-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Unnamed-Exp-QWQ-32b-v0.3.5-i1-GGUF/resolve/main/Unnamed-Exp-QWQ-32b-v0.3.5.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Unnamed-Exp-QWQ-32b-v0.3.5-i1-GGUF/resolve/main/Unnamed-Exp-QWQ-32b-v0.3.5.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Unnamed-Exp-QWQ-32b-v0.3.5-i1-GGUF/resolve/main/Unnamed-Exp-QWQ-32b-v0.3.5.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/Unnamed-Exp-QWQ-32b-v0.3.5-i1-GGUF/resolve/main/Unnamed-Exp-QWQ-32b-v0.3.5.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | | | [GGUF](https://huggingface.co/mradermacher/Unnamed-Exp-QWQ-32b-v0.3.5-i1-GGUF/resolve/main/Unnamed-Exp-QWQ-32b-v0.3.5.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | | | [GGUF](https://huggingface.co/mradermacher/Unnamed-Exp-QWQ-32b-v0.3.5-i1-GGUF/resolve/main/Unnamed-Exp-QWQ-32b-v0.3.5.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | | | [GGUF](https://huggingface.co/mradermacher/Unnamed-Exp-QWQ-32b-v0.3.5-i1-GGUF/resolve/main/Unnamed-Exp-QWQ-32b-v0.3.5.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Unnamed-Exp-QWQ-32b-v0.3.5-i1-GGUF/resolve/main/Unnamed-Exp-QWQ-32b-v0.3.5.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Unnamed-Exp-QWQ-32b-v0.3.5-i1-GGUF/resolve/main/Unnamed-Exp-QWQ-32b-v0.3.5.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Unnamed-Exp-QWQ-32b-v0.3.5-i1-GGUF/resolve/main/Unnamed-Exp-QWQ-32b-v0.3.5.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | | | [GGUF](https://huggingface.co/mradermacher/Unnamed-Exp-QWQ-32b-v0.3.5-i1-GGUF/resolve/main/Unnamed-Exp-QWQ-32b-v0.3.5.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Unnamed-Exp-QWQ-32b-v0.3.5-i1-GGUF/resolve/main/Unnamed-Exp-QWQ-32b-v0.3.5.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Unnamed-Exp-QWQ-32b-v0.3.5-i1-GGUF/resolve/main/Unnamed-Exp-QWQ-32b-v0.3.5.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | | | [GGUF](https://huggingface.co/mradermacher/Unnamed-Exp-QWQ-32b-v0.3.5-i1-GGUF/resolve/main/Unnamed-Exp-QWQ-32b-v0.3.5.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Unnamed-Exp-QWQ-32b-v0.3.5-i1-GGUF/resolve/main/Unnamed-Exp-QWQ-32b-v0.3.5.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better | | 
[GGUF](https://huggingface.co/mradermacher/Unnamed-Exp-QWQ-32b-v0.3.5-i1-GGUF/resolve/main/Unnamed-Exp-QWQ-32b-v0.3.5.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | | | [GGUF](https://huggingface.co/mradermacher/Unnamed-Exp-QWQ-32b-v0.3.5-i1-GGUF/resolve/main/Unnamed-Exp-QWQ-32b-v0.3.5.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Unnamed-Exp-QWQ-32b-v0.3.5-i1-GGUF/resolve/main/Unnamed-Exp-QWQ-32b-v0.3.5.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Unnamed-Exp-QWQ-32b-v0.3.5-i1-GGUF/resolve/main/Unnamed-Exp-QWQ-32b-v0.3.5.i1-Q4_K_M.gguf) | i1-Q4_K_M | 19.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Unnamed-Exp-QWQ-32b-v0.3.5-i1-GGUF/resolve/main/Unnamed-Exp-QWQ-32b-v0.3.5.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | | | [GGUF](https://huggingface.co/mradermacher/Unnamed-Exp-QWQ-32b-v0.3.5-i1-GGUF/resolve/main/Unnamed-Exp-QWQ-32b-v0.3.5.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | | | [GGUF](https://huggingface.co/mradermacher/Unnamed-Exp-QWQ-32b-v0.3.5-i1-GGUF/resolve/main/Unnamed-Exp-QWQ-32b-v0.3.5.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | | | [GGUF](https://huggingface.co/mradermacher/Unnamed-Exp-QWQ-32b-v0.3.5-i1-GGUF/resolve/main/Unnamed-Exp-QWQ-32b-v0.3.5.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
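To make the Usage note above concrete, here is a minimal sketch of pulling one quant from the table and running it locally. It assumes `huggingface_hub` and `llama-cpp-python` are installed; the filename is the i1-Q4_K_M entry listed above, and the context size and prompt are illustrative rather than recommendations from the quantizer.

```python
# Download one imatrix quant from this repo and run it with llama-cpp-python.
# Assumes: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Unnamed-Exp-QWQ-32b-v0.3.5-i1-GGUF",
    filename="Unnamed-Exp-QWQ-32b-v0.3.5.i1-Q4_K_M.gguf",  # "fast, recommended" row
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # context size is illustrative
out = llm("Briefly explain what an imatrix (importance matrix) quant is.", max_tokens=128)
print(out["choices"][0]["text"])
```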
moyixiao/llama3_droa128_merge
moyixiao
2025-05-04T06:59:08Z
3
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T06:57:41Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DevQuasar/kyutai.helium-1-2b-pop-GGUF
DevQuasar
2025-05-04T06:42:13Z
20
0
null
[ "gguf", "text-generation", "base_model:kyutai/helium-1-2b-pop", "base_model:quantized:kyutai/helium-1-2b-pop", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T06:28:44Z
--- base_model: - kyutai/helium-1-2b-pop pipeline_tag: text-generation --- [<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com) Quantized version of: [kyutai/helium-1-2b-pop](https://huggingface.co/kyutai/helium-1-2b-pop) 'Make knowledge free for everyone' <p align="center"> Made with <br> <a href="https://www.civo.com/" target="_blank"> <img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/> </a> </p> <a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
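The card above gives no usage snippet; one plausible pattern (an assumption, not the quantizer's instructions) is to let llama-cpp-python fetch a GGUF file straight from the Hub. The `*Q4_K_M.gguf` glob is a guess at the quant filenames — check the repo's Files & versions tab and adjust.

```python
# Hedged sketch: Llama.from_pretrained downloads a matching GGUF from the Hub.
# The filename glob is an assumption about which quants this repo contains.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="DevQuasar/kyutai.helium-1-2b-pop-GGUF",
    filename="*Q4_K_M.gguf",  # pick a file that actually exists in the repo
)
print(llm("Helium is a", max_tokens=32)["choices"][0]["text"])
```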
ASethi04/meta-llama-Llama-3.1-8B-hellaswag-first-lora-4-0.001
ASethi04
2025-05-04T06:38:52Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "endpoints_compatible", "region:us" ]
null
2025-05-04T02:49:17Z
--- base_model: meta-llama/Llama-3.1-8B library_name: transformers model_name: meta-llama-Llama-3.1-8B-hellaswag-first-lora-4-0.001 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for meta-llama-Llama-3.1-8B-hellaswag-first-lora-4-0.001 This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-hellaswag-first-lora-4-0.001", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/3r6mawt9) This model was trained with SFT. ### Framework versions - TRL: 0.16.1 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
DefiChuks/Phase
DefiChuks
2025-05-04T06:38:22Z
0
1
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-04T06:38:22Z
--- license: apache-2.0 ---
berenbaum/model
berenbaum
2025-05-04T06:36:17Z
0
0
transformers
[ "transformers", "text-generation-inference", "unsloth", "qwen3", "en", "base_model:unsloth/Qwen3-14B", "base_model:finetune:unsloth/Qwen3-14B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-04T06:36:16Z
--- base_model: unsloth/Qwen3-14B tags: - text-generation-inference - transformers - unsloth - qwen3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** berenbaum - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen3-14B This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
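The card stops at the Unsloth badge. If this repo holds merged full-precision weights (an assumption — Unsloth uploads can also be LoRA adapters), a plain transformers load along these lines should work:

```python
# Sketch under the assumption that berenbaum/model contains merged Qwen3 weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("berenbaum/model")
model = AutoModelForCausalLM.from_pretrained("berenbaum/model", device_map="auto")

messages = [{"role": "user", "content": "Summarize Qwen3 in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```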
Membersuger/Euro_37
Membersuger
2025-05-04T06:34:26Z
1
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T04:43:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
loc1105/qwen2-capydata-captioning
loc1105
2025-05-04T06:32:11Z
3
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "region:us" ]
null
2025-05-04T06:31:50Z
--- base_model: unsloth/qwen2-vl-7b-instruct-unsloth-bnb-4bit library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
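Since the record's metadata names `peft` as the library and an Unsloth Qwen2-VL base model, one plausible inference path (assumed, since the card leaves usage unspecified) is the standard PEFT attach-adapter pattern:

```python
# Hedged sketch: load the Qwen2-VL base from the card's metadata, then attach
# this repo as a PEFT adapter. Dtype and device placement are illustrative.
import torch
from peft import PeftModel
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

base_id = "unsloth/qwen2-vl-7b-instruct-unsloth-bnb-4bit"
base = Qwen2VLForConditionalGeneration.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "loc1105/qwen2-capydata-captioning")
processor = AutoProcessor.from_pretrained(base_id)
# Build an image+text chat prompt with `processor`, then call model.generate().
```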
0xtinuviel/Qwen2.5-72B-Instruct-bnb-4bit-Gensyn-Swarm-subtle_rugged_snail
0xtinuviel
2025-05-04T06:31:09Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am subtle rugged snail", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-72B-Instruct-bnb-4bit", "base_model:finetune:Gensyn/Qwen2.5-72B-Instruct-bnb-4bit", "endpoints_compatible", "region:us" ]
null
2025-05-02T00:56:10Z
--- base_model: Gensyn/Qwen2.5-72B-Instruct-bnb-4bit library_name: transformers model_name: Qwen2.5-72B-Instruct-bnb-4bit-Gensyn-Swarm-subtle_rugged_snail tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am subtle rugged snail - unsloth - trl licence: license --- # Model Card for Qwen2.5-72B-Instruct-bnb-4bit-Gensyn-Swarm-subtle_rugged_snail This model is a fine-tuned version of [Gensyn/Qwen2.5-72B-Instruct-bnb-4bit](https://huggingface.co/Gensyn/Qwen2.5-72B-Instruct-bnb-4bit). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="0xtinuviel/Qwen2.5-72B-Instruct-bnb-4bit-Gensyn-Swarm-subtle_rugged_snail", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Kenazin/Llama-3.1-8B-peft-v6-10
Kenazin
2025-05-04T06:28:33Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-04T06:28:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Kenazin/Llama-3.1-8B-peft-v6-8
Kenazin
2025-05-04T06:28:14Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-04T06:28:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
llc890410/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tiny_lightfooted_bee
llc890410
2025-05-04T06:25:32Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am tiny lightfooted bee", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-28T12:28:10Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tiny_lightfooted_bee tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am tiny lightfooted bee - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tiny_lightfooted_bee This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="llc890410/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tiny_lightfooted_bee", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
brong27/brg
brong27
2025-05-04T06:25:28Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-04T06:25:27Z
--- license: apache-2.0 ---
grok3234/llama_3.2_3b_QA_FineTuned_v2
grok3234
2025-05-04T06:22:44Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T06:20:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mohsed/Basalam
mohsed
2025-05-04T06:22:21Z
0
0
null
[ "fa", "license:apache-2.0", "region:us" ]
null
2025-05-04T06:21:26Z
--- license: apache-2.0 license_name: bslmblog license_link: LICENSE language: - fa ---
rishika315/dummy-model
rishika315
2025-05-04T06:16:22Z
0
0
transformers
[ "transformers", "safetensors", "camembert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2025-05-04T06:15:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
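The card is an empty template, but the record's tags (camembert, fill-mask) imply the usual masked-LM pipeline; a minimal assumed usage sketch — CamemBERT's mask token is `<mask>` — looks like this:

```python
# Assumed usage based only on the repo's tags; the card provides no snippet.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="rishika315/dummy-model")
for pred in fill_mask("Le camembert est <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```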
HoaDoan1710/whisper-checkpoint-4525
HoaDoan1710
2025-05-04T06:13:35Z
0
0
null
[ "safetensors", "whisper", "license:apache-2.0", "region:us" ]
null
2025-05-04T06:04:08Z
--- license: apache-2.0 ---
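The card itself is license-only, but the `whisper` and `safetensors` tags suggest a transformers-format Whisper checkpoint; if that assumption holds, the ASR pipeline should load it directly:

```python
# Assumption: a standard transformers Whisper checkpoint, inferred from tags alone.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="HoaDoan1710/whisper-checkpoint-4525")
print(asr("sample.wav")["text"])  # any local audio file
```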
pranitha02/corgy_dog_LoRA
pranitha02
2025-05-04T06:11:46Z
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2025-05-04T04:16:25Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: openrail++ instance_prompt: watercolor style image widget: [] tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - pranitha02/corgy_dog_LoRA <Gallery /> ## Model description These are pranitha02/corgy_dog_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use watercolor style image to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/pranitha02/corgy_dog_LoRA/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use
```python
# Sketch filled in from the card's own metadata (base model, trigger words);
# this is the standard diffusers LoRA-loading pattern, not an author-provided snippet.
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("pranitha02/corgy_dog_LoRA")
image = pipeline("a corgi, watercolor style image").images[0]
```
#### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
iecjsu/QWen2.5_VL_7B_IT-FT
iecjsu
2025-05-04T06:10:55Z
0
0
transformers
[ "transformers", "qwen2_5_vl", "feature-extraction", "text-generation-inference", "unsloth", "en", "base_model:unsloth/Qwen2.5-VL-7B-Instruct-unsloth-bnb-4bit", "base_model:finetune:unsloth/Qwen2.5-VL-7B-Instruct-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
2025-05-04T06:05:01Z
--- base_model: unsloth/Qwen2.5-VL-7B-Instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2_5_vl license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** iecjsu - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen2.5-VL-7B-Instruct-unsloth-bnb-4bit This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
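As with other Unsloth uploads in this dump, the repo may hold either merged weights or an adapter; assuming merged Qwen2.5-VL weights and a transformers version with Qwen2.5-VL support (4.49+), a load would look roughly like:

```python
# Sketch under stated assumptions; not an author-provided snippet.
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "iecjsu/QWen2.5_VL_7B_IT-FT", device_map="auto"
)
processor = AutoProcessor.from_pretrained("iecjsu/QWen2.5_VL_7B_IT-FT")
# Apply the chat template to an image + question, then call model.generate().
```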
1245erty/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jumping_lithe_scorpion
1245erty
2025-05-04T06:09:34Z
12
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am jumping lithe scorpion", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-20T16:38:45Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jumping_lithe_scorpion tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am jumping lithe scorpion - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jumping_lithe_scorpion This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="1245erty/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jumping_lithe_scorpion", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
RRashmini/google-umt5-small-8
RRashmini
2025-05-04T06:08:30Z
0
0
transformers
[ "transformers", "safetensors", "umt5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-05-04T06:07:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
outlookAi/QrtYRbP6oM
outlookAi
2025-05-04T06:07:16Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-04T05:46:12Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: Dremy Bokeh --- # Qrtyrbp6Om <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using the AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `Dremy Bokeh` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "Dremy Bokeh", "lora_weights": "https://huggingface.co/outlookAi/QrtYRbP6oM/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('outlookAi/QrtYRbP6oM', weight_name='lora.safetensors') image = pipeline('Dremy Bokeh').images[0] image.save('output.png') ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters). ## Training details - Steps: 1500 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/outlookAi/QrtYRbP6oM/discussions) to add images that show off what you’ve made with this LoRA.
RichardErkhov/tsavage68_-_IE_L3_350steps_1e8rate_03beta_cSFTDPO-gguf
RichardErkhov
2025-05-04T06:06:28Z
0
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-04T02:59:08Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) IE_L3_350steps_1e8rate_03beta_cSFTDPO - GGUF - Model creator: https://huggingface.co/tsavage68/ - Original model: https://huggingface.co/tsavage68/IE_L3_350steps_1e8rate_03beta_cSFTDPO/ | Name | Quant method | Size | | ---- | ---- | ---- | | [IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q2_K.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_350steps_1e8rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q2_K.gguf) | Q2_K | 2.96GB | | [IE_L3_350steps_1e8rate_03beta_cSFTDPO.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_350steps_1e8rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_350steps_1e8rate_03beta_cSFTDPO.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [IE_L3_350steps_1e8rate_03beta_cSFTDPO.IQ3_S.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_350steps_1e8rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_350steps_1e8rate_03beta_cSFTDPO.IQ3_S.gguf) | IQ3_S | 3.43GB | | [IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_350steps_1e8rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [IE_L3_350steps_1e8rate_03beta_cSFTDPO.IQ3_M.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_350steps_1e8rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_350steps_1e8rate_03beta_cSFTDPO.IQ3_M.gguf) | IQ3_M | 3.52GB | | [IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q3_K.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_350steps_1e8rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q3_K.gguf) | Q3_K | 3.74GB | | [IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_350steps_1e8rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_350steps_1e8rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [IE_L3_350steps_1e8rate_03beta_cSFTDPO.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_350steps_1e8rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_350steps_1e8rate_03beta_cSFTDPO.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q4_0.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_350steps_1e8rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q4_0.gguf) | Q4_0 | 4.34GB | | [IE_L3_350steps_1e8rate_03beta_cSFTDPO.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_350steps_1e8rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_350steps_1e8rate_03beta_cSFTDPO.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_350steps_1e8rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q4_K.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_350steps_1e8rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q4_K.gguf) | Q4_K | 4.58GB | | 
[IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_350steps_1e8rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q4_1.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_350steps_1e8rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q4_1.gguf) | Q4_1 | 4.78GB | | [IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q5_0.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_350steps_1e8rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q5_0.gguf) | Q5_0 | 5.21GB | | [IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_350steps_1e8rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q5_K.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_350steps_1e8rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q5_K.gguf) | Q5_K | 5.34GB | | [IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_350steps_1e8rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q5_1.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_350steps_1e8rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q5_1.gguf) | Q5_1 | 5.65GB | | [IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q6_K.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_350steps_1e8rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q6_K.gguf) | Q6_K | 6.14GB | | [IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q8_0.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_350steps_1e8rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- library_name: transformers license: llama3 base_model: tsavage68/IE_L3_1000steps_1e6rate_SFT tags: - trl - dpo - generated_from_trainer model-index: - name: IE_L3_350steps_1e8rate_03beta_cSFTDPO results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # IE_L3_350steps_1e8rate_03beta_cSFTDPO This model is a fine-tuned version of [tsavage68/IE_L3_1000steps_1e6rate_SFT](https://huggingface.co/tsavage68/IE_L3_1000steps_1e6rate_SFT) on an unknown dataset. 
It achieves the following results on the evaluation set: - Loss: 0.6896 - Rewards/chosen: -0.0071 - Rewards/rejected: -0.0198 - Rewards/accuracies: 0.4400 - Rewards/margins: 0.0127 - Logps/rejected: -75.6932 - Logps/chosen: -82.8214 - Logits/rejected: -0.7977 - Logits/chosen: -0.7408 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-08 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 350 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6912 | 0.4 | 50 | 0.6940 | -0.0075 | -0.0104 | 0.4000 | 0.0029 | -75.6618 | -82.8226 | -0.7964 | -0.7393 | | 0.6947 | 0.8 | 100 | 0.6925 | 0.0014 | -0.0057 | 0.3850 | 0.0070 | -75.6461 | -82.7931 | -0.7963 | -0.7394 | | 0.6881 | 1.2 | 150 | 0.7003 | -0.0102 | -0.0020 | 0.375 | -0.0082 | -75.6340 | -82.8318 | -0.7969 | -0.7398 | | 0.6776 | 1.6 | 200 | 0.6938 | -0.0057 | -0.0098 | 0.375 | 0.0041 | -75.6601 | -82.8168 | -0.7970 | -0.7399 | | 0.6859 | 2.0 | 250 | 0.6850 | -0.0033 | -0.0250 | 0.4350 | 0.0217 | -75.7105 | -82.8087 | -0.7975 | -0.7405 | | 0.7024 | 2.4 | 300 | 0.6893 | -0.0075 | -0.0207 | 0.4400 | 0.0132 | -75.6964 | -82.8228 | -0.7977 | -0.7408 | | 0.6802 | 2.8 | 350 | 0.6896 | -0.0071 | -0.0198 | 0.4400 | 0.0127 | -75.6932 | -82.8214 | -0.7977 | -0.7408 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.0.0+cu117 - Datasets 3.0.0 - Tokenizers 0.19.1
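The quant table above lists the files but not how to run them. As a hedged sketch (llama-cpp-python is an assumption here, not part of the original card; any GGUF-compatible runtime such as llama.cpp works just as well, and Q4_K_M is picked only as a common size/quality middle ground):

```py
# Minimal sketch: run one of the GGUF quants above with llama-cpp-python.
# Assumes `pip install llama-cpp-python huggingface_hub`.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="RichardErkhov/tsavage68_-_IE_L3_350steps_1e8rate_03beta_cSFTDPO-gguf",
    filename="IE_L3_350steps_1e8rate_03beta_cSFTDPO.Q4_K_M.gguf",
    n_ctx=2048,  # context window; raise it if your prompts are longer
)

# The prompt is illustrative only; adapt it to your task.
output = llm("Q: What does DPO stand for? A:", max_tokens=64)
print(output["choices"][0]["text"])
```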
Lahinthefutureland/wan-toffee
Lahinthefutureland
2025-05-04T06:06:14Z
0
1
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-09T19:32:49Z
--- license: apache-2.0 ---
mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF
mradermacher
2025-05-04T06:00:45Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:TareksTesting/Alkahest-V9.3-LLaMa-70B", "base_model:quantized:TareksTesting/Alkahest-V9.3-LLaMa-70B", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-05-03T08:05:45Z
--- base_model: TareksTesting/Alkahest-V9.3-LLaMa-70B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/TareksTesting/Alkahest-V9.3-LLaMa-70B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | | | 
[GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.1 | | | [PART 1](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
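One practical note the card gestures at: the Q6_K quant above ships in two parts, and per the linked READMEs a multi-part GGUF is joined by plain byte-wise concatenation before use. A minimal sketch, assuming both part files are already downloaded into the working directory:

```py
# Join a multi-part GGUF download into a single usable file.
# Equivalent to `cat part1 part2 > merged` on the command line.
import shutil

parts = [
    "Alkahest-V9.3-LLaMa-70B.i1-Q6_K.gguf.part1of2",
    "Alkahest-V9.3-LLaMa-70B.i1-Q6_K.gguf.part2of2",
]

with open("Alkahest-V9.3-LLaMa-70B.i1-Q6_K.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as chunk:
            # Stream in blocks so the ~58GB file never has to fit in RAM.
            shutil.copyfileobj(chunk, merged)
```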
ohsk/welfare
ohsk
2025-05-04T05:56:04Z
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-04T04:13:37Z
--- base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** ohsk - **License:** apache-2.0 - **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mlfoundations-dev/d1_math_shortest_10k
mlfoundations-dev
2025-05-04T05:55:54Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T12:44:32Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-7B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: d1_math_shortest_10k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # d1_math_shortest_10k This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_math_shortest_10k dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - total_eval_batch_size: 32 - optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments) - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.6.0+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
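Since the card gives no usage snippet, here is a minimal, hedged loading sketch (the prompt is illustrative; as a full fine-tune of Qwen2.5-7B-Instruct, the model uses the standard chat interface). As a sanity check on the hyperparameters above: 1 per-device batch × 4 GPUs × 32 gradient-accumulation steps = 128, matching the stated total train batch size.

```py
# Minimal sketch: query the fine-tuned model with the transformers chat pipeline.
# Assumes a GPU with enough memory for a 7B model in bfloat16.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="mlfoundations-dev/d1_math_shortest_10k",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Differentiate f(x) = x^3 - 2x."}]
result = pipe(messages, max_new_tokens=256)
# The pipeline returns the full conversation; the last message is the reply.
print(result[0]["generated_text"][-1]["content"])
```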
DevQuasar/kyutai.helium-1-2b-wiki-GGUF
DevQuasar
2025-05-04T05:55:32Z
0
0
null
[ "gguf", "text-generation", "base_model:kyutai/helium-1-2b-wiki", "base_model:quantized:kyutai/helium-1-2b-wiki", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T05:41:52Z
--- base_model: - kyutai/helium-1-2b-wiki pipeline_tag: text-generation --- [<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com) Quantized version of: [kyutai/helium-1-2b-wiki](https://huggingface.co/kyutai/helium-1-2b-wiki) 'Make knowledge free for everyone' <p align="center"> Made with <br> <a href="https://www.civo.com/" target="_blank"> <img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/> </a> </p> <a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
ail-sa/kevin_plus_medium_fs_v1
ail-sa
2025-05-04T05:55:24Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-04T05:22:10Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: Sid --- # Kevin_Plus_Medium_Fs_V1 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using the AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `Sid` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "Sid", "lora_weights": "https://huggingface.co/ail-sa/kevin_plus_medium_fs_v1/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('ail-sa/kevin_plus_medium_fs_v1', weight_name='lora.safetensors') image = pipeline('Sid').images[0] image.save('output.png') ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters). ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/ail-sa/kevin_plus_medium_fs_v1/discussions) to add images that show off what you’ve made with this LoRA.
loris3/stratified_10m_curriculum_roberta_roberta_incr_influence_epoch_repetition
loris3
2025-05-04T05:53:14Z
0
0
transformers
[ "transformers", "safetensors", "roberta", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2025-05-04T00:43:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]