Dataset schema (column, type, and observed min/max):

| Column        | Type                  | Min                 | Max                 |
|:--------------|:----------------------|:--------------------|:--------------------|
| modelId       | string (length)       | 5                   | 139                 |
| author        | string (length)       | 2                   | 42                  |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-25 06:27:54 |
| downloads     | int64                 | 0                   | 223M                |
| likes         | int64                 | 0                   | 11.7k               |
| library_name  | string (495 classes)  |                     |                     |
| tags          | sequence (length)     | 1                   | 4.05k               |
| pipeline_tag  | string (54 classes)   |                     |                     |
| createdAt     | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-25 06:24:22 |
| card          | string (length)       | 11                  | 1.01M               |
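For anyone working with this dump programmatically, the sketch below shows one way to load and query it with the 🤗 `datasets` library. The dataset id is a placeholder; the column names match the schema above.

```python
# Minimal sketch of querying a dump like this with the `datasets` library.
# The dataset id is a placeholder -- substitute the actual repo id.
from datasets import load_dataset

ds = load_dataset("your-org/model-cards-dump", split="train")  # hypothetical id

# Filter rows by pipeline tag and rank by download count.
text_gen = ds.filter(lambda row: row["pipeline_tag"] == "text-generation")
top = sorted(text_gen, key=lambda row: row["downloads"], reverse=True)[:5]
for row in top:
    print(row["modelId"], row["downloads"], row["likes"])
```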
dimasik1987/4c92d528-2a22-4204-90b0-1423510f0988
dimasik1987
2025-04-30T11:49:38Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:heegyu/WizardVicuna2-13b-hf", "base_model:adapter:heegyu/WizardVicuna2-13b-hf", "4-bit", "bitsandbytes", "region:us" ]
null
2025-04-30T11:35:31Z
---
library_name: peft
base_model: heegyu/WizardVicuna2-13b-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4c92d528-2a22-4204-90b0-1423510f0988
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: heegyu/WizardVicuna2-13b-hf
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
  - 05a1d5d398a81bd6_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/05a1d5d398a81bd6_train_data.json
  type:
    field_input: test
    field_instruction: question
    field_output: solution
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: dimasik1987/4c92d528-2a22-4204-90b0-1423510f0988
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 10
mixed_precision: bf16
mlflow_experiment_name: /tmp/05a1d5d398a81bd6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
special_tokens:
  pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fd1dd4a2-11ce-46e2-8594-291f6e26aaab
wandb_project: s56-7
wandb_run: your_name
wandb_runid: fd1dd4a2-11ce-46e2-8594-291f6e26aaab
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```

</details><br>

# 4c92d528-2a22-4204-90b0-1423510f0988

This model is a fine-tuned version of [heegyu/WizardVicuna2-13b-hf](https://huggingface.co/heegyu/WizardVicuna2-13b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5824

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6547        | 0.1754 | 150  | 0.5824          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
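Since this repo ships only a LoRA adapter rather than merged weights, loading it for inference could look like the sketch below. This is a minimal sketch assuming `peft`, `transformers`, and `bitsandbytes` are installed; it has not been tested against this particular checkpoint.

```python
# Minimal sketch: loading this LoRA adapter on top of its base model with PEFT.
# Assumes `peft`, `transformers`, and `bitsandbytes` are installed; untested here.
import torch
from transformers import AutoTokenizer, BitsAndBytesConfig
from peft import AutoPeftModelForCausalLM

adapter_id = "dimasik1987/4c92d528-2a22-4204-90b0-1423510f0988"

# The adapter was trained against a 4-bit quantized base, so load it the same way.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("heegyu/WizardVicuna2-13b-hf")

inputs = tokenizer("Question: what is 2 + 2? ", return_tensors="pt").to(model.device)
with torch.inference_mode():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```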
DrTiagoSaldanha/ssssss
DrTiagoSaldanha
2025-04-30T11:48:15Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-30T11:48:15Z
---
license: apache-2.0
---
kokovova/c319b9cb-0531-4e57-afef-c899e662dda4
kokovova
2025-04-30T11:47:57Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:heegyu/WizardVicuna2-13b-hf", "base_model:adapter:heegyu/WizardVicuna2-13b-hf", "4-bit", "bitsandbytes", "region:us" ]
null
2025-04-30T11:35:12Z
---
library_name: peft
base_model: heegyu/WizardVicuna2-13b-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c319b9cb-0531-4e57-afef-c899e662dda4
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: heegyu/WizardVicuna2-13b-hf
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
  - 05a1d5d398a81bd6_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/05a1d5d398a81bd6_train_data.json
  type:
    field_input: test
    field_instruction: question
    field_output: solution
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: kokovova/c319b9cb-0531-4e57-afef-c899e662dda4
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/05a1d5d398a81bd6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
  pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fd1dd4a2-11ce-46e2-8594-291f6e26aaab
wandb_project: s56-4
wandb_run: your_name
wandb_runid: fd1dd4a2-11ce-46e2-8594-291f6e26aaab
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```

</details><br>

# c319b9cb-0531-4e57-afef-c899e662dda4

This model is a fine-tuned version of [heegyu/WizardVicuna2-13b-hf](https://huggingface.co/heegyu/WizardVicuna2-13b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3847

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4169        | 0.1871 | 200  | 0.3847          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
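A related workflow is merging such an adapter back into its base model to produce standalone weights. The sketch below assumes `peft` and `transformers` are available; note that the adapter was trained against a 4-bit base, so a bf16 merge is a close but not exact reconstruction, and the output directory is a placeholder.

```python
# Sketch: merging this LoRA adapter into its full-precision base model so the
# result can be used without `peft`. Paths other than the repo ids are placeholders.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "heegyu/WizardVicuna2-13b-hf", torch_dtype=torch.bfloat16
)
merged = PeftModel.from_pretrained(base, "kokovova/c319b9cb-0531-4e57-afef-c899e662dda4")
merged = merged.merge_and_unload()  # folds the LoRA deltas into the base weights

merged.save_pretrained("merged-wizardvicuna-lora")  # placeholder output dir
AutoTokenizer.from_pretrained("heegyu/WizardVicuna2-13b-hf").save_pretrained(
    "merged-wizardvicuna-lora"
)
```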
firoz123/gemma3-lora-gguf
firoz123
2025-04-30T11:47:00Z
0
0
transformers
[ "transformers", "safetensors", "gguf", "text-generation-inference", "unsloth", "gemma3_text", "trl", "en", "base_model:unsloth/gemma-3-1b-it", "base_model:quantized:unsloth/gemma-3-1b-it", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-30T11:46:33Z
---
base_model: unsloth/gemma-3-1b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** firoz123
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-1b-it

This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
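For the GGUF files in a repo like this, `llama-cpp-python` is one way to run them locally. This is a sketch only; the filename pattern is a placeholder to be matched against the repo's actual file list, and `huggingface_hub` must be installed for `from_pretrained` to work.

```python
# Sketch: running a GGUF quantization of this model with llama-cpp-python.
# The filename pattern is a placeholder -- check the repo's file list.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="firoz123/gemma3-lora-gguf",
    filename="*Q4_K_M.gguf",  # hypothetical; pick an actual file from the repo
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what LoRA fine-tuning does."}]
)
print(out["choices"][0]["message"]["content"])
```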
firoz123/gemma3-gguf
firoz123
2025-04-30T11:44:18Z
0
0
transformers
[ "transformers", "safetensors", "gguf", "gemma3_text", "text-generation", "conversational", "arxiv:1905.07830", "arxiv:1905.10044", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1705.03551", "arxiv:1911.01547", "arxiv:1907.10641", "arxiv:1903.00161", "arxiv:2009.03300", "arxiv:2304.06364", "arxiv:2103.03874", "arxiv:2110.14168", "arxiv:2311.12022", "arxiv:2108.07732", "arxiv:2107.03374", "arxiv:2210.03057", "arxiv:2106.03193", "arxiv:1910.11856", "arxiv:2502.12404", "arxiv:2502.21228", "arxiv:2404.16816", "arxiv:2104.12756", "arxiv:2311.16502", "arxiv:2203.10244", "arxiv:2404.12390", "arxiv:1810.12440", "arxiv:1908.02660", "arxiv:2312.11805", "base_model:google/gemma-3-1b-pt", "base_model:quantized:google/gemma-3-1b-pt", "license:gemma", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T11:41:44Z
---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-1b-pt
---

# Gemma 3 model card

**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core)

**Resources and Technical Documentation**:

* [Gemma 3 Technical Report][g3-tech-report]
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma3]

**Terms of Use**: [Terms][terms]

**Authors**: Google DeepMind

## Model Information

Summary description and brief definition of inputs and outputs.

### Description

Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. Gemma 3 models are multimodal, handling text and image input and generating text output, with open weights for both pre-trained variants and instruction-tuned variants. Gemma 3 has a large 128K context window, multilingual support in over 140 languages, and is available in more sizes than previous versions. Gemma 3 models are well suited for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as laptops, desktops, or your own cloud infrastructure, democratizing access to state-of-the-art AI models and helping foster innovation for everyone.

### Inputs and outputs

- **Input:**
  - Text string, such as a question, a prompt, or a document to be summarized
  - Images, normalized to 896 x 896 resolution and encoded to 256 tokens each
  - Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and 32K tokens for the 1B size
- **Output:**
  - Generated text in response to the input, such as an answer to a question, analysis of image content, or a summary of a document
  - Total output context of 8192 tokens

### Usage

Below are some code snippets to help you get started quickly with the model. First, install the Transformers library. Gemma 3 is supported starting from transformers 4.50.0.

```sh
$ pip install -U transformers
```

Then, copy the snippet from the section that is relevant for your use case.

#### Running with the `pipeline` API

With instruction-tuned models, you need to use chat templates to process your inputs first. Then you can pass them to the pipeline.
```python
from transformers import pipeline
import torch

pipe = pipeline(
    "text-generation",
    model="google/gemma-3-1b-it",
    device="cuda",
    torch_dtype=torch.bfloat16,
)

messages = [
    [
        {
            "role": "system",
            "content": [{"type": "text", "text": "You are a helpful assistant."},]
        },
        {
            "role": "user",
            "content": [{"type": "text", "text": "Write a poem on Hugging Face, the company"},]
        },
    ],
]

output = pipe(messages, max_new_tokens=50)
```

#### Running the model on a single / multi GPU

```python
from transformers import AutoTokenizer, BitsAndBytesConfig, Gemma3ForCausalLM
import torch

model_id = "google/gemma-3-1b-it"

quantization_config = BitsAndBytesConfig(load_in_8bit=True)

model = Gemma3ForCausalLM.from_pretrained(
    model_id, quantization_config=quantization_config
).eval()

tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    [
        {
            "role": "system",
            "content": [{"type": "text", "text": "You are a helpful assistant."},]
        },
        {
            "role": "user",
            "content": [{"type": "text", "text": "Write a poem on Hugging Face, the company"},]
        },
    ],
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device).to(torch.bfloat16)

with torch.inference_mode():
    outputs = model.generate(**inputs, max_new_tokens=64)

outputs = tokenizer.batch_decode(outputs)
```

### Citation

```none
@article{gemma_2025,
    title={Gemma 3},
    url={https://goo.gle/Gemma3Report},
    publisher={Kaggle},
    author={Gemma Team},
    year={2025}
}
```

## Model Data

Data used for model training and how the data was processed.

### Training Dataset

These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 14 trillion tokens, the 12B model with 12 trillion tokens, the 4B model with 4 trillion tokens, and the 1B model with 2 trillion tokens. Here are the key components:

- Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. The training dataset includes content in over 140 languages.
- Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code and understand code-related questions.
- Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries.
- Images: A wide range of images enables the model to perform image analysis and visual data extraction tasks.

The combination of these diverse data sources is crucial for training a powerful multimodal model that can handle a wide variety of different tasks and data formats.

### Data Preprocessing

Here are the key data cleaning and filtering methods applied to the training data:

- CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content.
- Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets.
- Additional methods: Filtering based on content quality and safety in line with [our policies][safety-policies].

## Implementation Information

Details about the model internals.

### Hardware

Gemma was trained using [Tensor Processing Unit (TPU)][tpu] hardware (TPUv4p, TPUv5p and TPUv5e).
Training vision-language models (VLMs) requires significant computational power. TPUs, designed specifically for the matrix operations common in machine learning, offer several advantages in this domain:

- Performance: TPUs are specifically designed to handle the massive computations involved in training VLMs. They can speed up training considerably compared to CPUs.
- Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality.
- Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing.
- Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training.

These advantages are aligned with [Google's commitments to operate sustainably][sustainability].

### Software

Training was done using [JAX][jax] and [ML Pathways][ml-pathways].

JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is especially suitable for foundation models, including large language models like these.

Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models][gemini-2-paper]; *"the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow."*

## Evaluation

Model evaluation metrics and results.
### Benchmark Results

These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation:

#### Reasoning and factuality

| Benchmark                     | Metric   | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ----------------------------- | -------- |:-------------:|:-------------:|:--------------:|:--------------:|
| [HellaSwag][hellaswag]        | 10-shot  | 62.3          | 77.2          | 84.2           | 85.6           |
| [BoolQ][boolq]                | 0-shot   | 63.2          | 72.3          | 78.8           | 82.4           |
| [PIQA][piqa]                  | 0-shot   | 73.8          | 79.6          | 81.8           | 83.3           |
| [SocialIQA][socialiqa]        | 0-shot   | 48.9          | 51.9          | 53.4           | 54.9           |
| [TriviaQA][triviaqa]          | 5-shot   | 39.8          | 65.8          | 78.2           | 85.5           |
| [Natural Questions][naturalq] | 5-shot   | 9.48          | 20.0          | 31.4           | 36.1           |
| [ARC-c][arc]                  | 25-shot  | 38.4          | 56.2          | 68.9           | 70.6           |
| [ARC-e][arc]                  | 0-shot   | 73.0          | 82.4          | 88.3           | 89.0           |
| [WinoGrande][winogrande]      | 5-shot   | 58.2          | 64.7          | 74.3           | 78.8           |
| [BIG-Bench Hard][bbh]         | few-shot | 28.4          | 50.9          | 72.6           | 77.7           |
| [DROP][drop]                  | 1-shot   | 42.4          | 60.1          | 72.2           | 77.2           |

[hellaswag]: https://arxiv.org/abs/1905.07830
[boolq]: https://arxiv.org/abs/1905.10044
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[arc]: https://arxiv.org/abs/1911.01547
[winogrande]: https://arxiv.org/abs/1907.10641
[bbh]: https://paperswithcode.com/dataset/bbh
[drop]: https://arxiv.org/abs/1903.00161

#### STEM and code

| Benchmark              | Metric   | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ---------------------- | -------- |:-------------:|:--------------:|:--------------:|
| [MMLU][mmlu]           | 5-shot   | 59.6          | 74.5           | 78.6           |
| [MMLU][mmlu] (Pro COT) | 5-shot   | 29.2          | 45.3           | 52.2           |
| [AGIEval][agieval]     | 3-5-shot | 42.1          | 57.4           | 66.2           |
| [MATH][math]           | 4-shot   | 24.2          | 43.3           | 50.0           |
| [GSM8K][gsm8k]         | 8-shot   | 38.4          | 71.0           | 82.6           |
| [GPQA][gpqa]           | 5-shot   | 15.0          | 25.4           | 24.3           |
| [MBPP][mbpp]           | 3-shot   | 46.0          | 60.4           | 65.6           |
| [HumanEval][humaneval] | 0-shot   | 36.0          | 45.7           | 48.8           |

[mmlu]: https://arxiv.org/abs/2009.03300
[agieval]: https://arxiv.org/abs/2304.06364
[math]: https://arxiv.org/abs/2103.03874
[gsm8k]: https://arxiv.org/abs/2110.14168
[gpqa]: https://arxiv.org/abs/2311.12022
[mbpp]: https://arxiv.org/abs/2108.07732
[humaneval]: https://arxiv.org/abs/2107.03374

#### Multilingual

| Benchmark                            | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------------ |:-------------:|:-------------:|:--------------:|:--------------:|
| [MGSM][mgsm]                         | 2.04          | 34.7          | 64.3           | 74.3           |
| [Global-MMLU-Lite][global-mmlu-lite] | 24.9          | 57.0          | 69.4           | 75.7           |
| [WMT24++][wmt24pp] (ChrF)            | 36.7          | 48.4          | 53.9           | 55.7           |
| [FloRes][flores]                     | 29.5          | 39.2          | 46.0           | 48.8           |
| [XQuAD][xquad] (all)                 | 43.9          | 68.0          | 74.5           | 76.8           |
| [ECLeKTic][eclektic]                 | 4.69          | 11.0          | 17.2           | 24.4           |
| [IndicGenBench][indicgenbench]       | 41.4          | 57.2          | 61.7           | 63.4           |

[mgsm]: https://arxiv.org/abs/2210.03057
[flores]: https://arxiv.org/abs/2106.03193
[xquad]: https://arxiv.org/abs/1910.11856v3
[global-mmlu-lite]: https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite
[wmt24pp]: https://arxiv.org/abs/2502.12404v1
[eclektic]: https://arxiv.org/abs/2502.21228
[indicgenbench]: https://arxiv.org/abs/2404.16816

#### Multimodal

| Benchmark                    | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ---------------------------- |:-------------:|:--------------:|:--------------:|
| [COCOcap][coco-cap]          | 102           | 111            | 116            |
| [DocVQA][docvqa] (val)       | 72.8          | 82.3           | 85.6           |
| [InfoVQA][info-vqa] (val)    | 44.1          | 54.8           | 59.4           |
| [MMMU][mmmu] (pt)            | 39.2          | 50.3           | 56.1           |
| [TextVQA][textvqa] (val)     | 58.9          | 66.5           | 68.6           |
| [RealWorldQA][realworldqa]   | 45.5          | 52.2           | 53.9           |
| [ReMI][remi]                 | 27.3          | 38.5           | 44.8           |
| [AI2D][ai2d]                 | 63.2          | 75.2           | 79.0           |
| [ChartQA][chartqa]           | 63.6          | 74.7           | 76.3           |
| [VQAv2][vqav2]               | 63.9          | 71.2           | 72.9           |
| [BLINK][blinkvqa]            | 38.0          | 35.9           | 39.6           |
| [OKVQA][okvqa]               | 51.0          | 58.7           | 60.2           |
| [TallyQA][tallyqa]           | 42.5          | 51.8           | 54.3           |
| [SpatialSense VQA][ss-vqa]   | 50.9          | 60.0           | 59.4           |
| [CountBenchQA][countbenchqa] | 26.1          | 17.8           | 68.0           |

[coco-cap]: https://cocodataset.org/#home
[docvqa]: https://www.docvqa.org/
[info-vqa]: https://arxiv.org/abs/2104.12756
[mmmu]: https://arxiv.org/abs/2311.16502
[textvqa]: https://textvqa.org/
[realworldqa]: https://paperswithcode.com/dataset/realworldqa
[remi]: https://arxiv.org/html/2406.09175v1
[ai2d]: https://allenai.org/data/diagrams
[chartqa]: https://arxiv.org/abs/2203.10244
[vqav2]: https://visualqa.org/index.html
[blinkvqa]: https://arxiv.org/abs/2404.12390
[okvqa]: https://okvqa.allenai.org/
[tallyqa]: https://arxiv.org/abs/1810.12440
[ss-vqa]: https://arxiv.org/abs/1908.02660
[countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/

## Ethics and Safety

Ethics and safety evaluation approach and results.

### Evaluation Approach

Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including:

- **Child Safety**: Evaluation of text-to-text and image-to-text prompts covering child safety policies, including child sexual abuse and exploitation.
- **Content Safety**: Evaluation of text-to-text and image-to-text prompts covering safety policies including harassment, violence and gore, and hate speech.
- **Representational Harms**: Evaluation of text-to-text and image-to-text prompts covering safety policies including bias, stereotyping, and harmful associations or inaccuracies.

In addition to development-level evaluations, we conduct "assurance evaluations" which are our 'arms-length' internal evaluations for responsibility governance decision making. They are conducted separately from the model development team, to inform decision making about release. High-level findings are fed back to the model team, but prompt sets are held out to prevent overfitting and preserve the results' ability to inform decision making. Assurance evaluation results are reported to our Responsibility & Safety Council as part of release review.

### Evaluation Results

For all areas of safety testing, we saw major improvements in the categories of child safety, content safety, and representational harms relative to previous Gemma models. All testing was conducted without safety filters to evaluate the model capabilities and behaviors. For both text-to-text and image-to-text, and across all model sizes, the model produced minimal policy violations, and showed significant improvements over previous Gemma models' performance with respect to ungrounded inferences. A limitation of our evaluations was that they included only English-language prompts.
## Usage and Limitations

These models have certain limitations that users should be aware of.

### Intended Usage

Open vision-language models (VLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development.

- Content Creation and Communication
  - Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts.
  - Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications.
  - Text Summarization: Generate concise summaries of a text corpus, research papers, or reports.
  - Image Data Extraction: These models can be used to extract, interpret, and summarize visual data for text communications.
- Research and Education
  - Natural Language Processing (NLP) and VLM Research: These models can serve as a foundation for researchers to experiment with VLM and NLP techniques, develop algorithms, and contribute to the advancement of the field.
  - Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice.
  - Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics.

### Limitations

- Training Data
  - The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses.
  - The scope of the training dataset determines the subject areas the model can handle effectively.
- Context and Task Complexity
  - Models are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging.
  - A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point).
- Language Ambiguity and Nuance
  - Natural language is inherently complex. Models might struggle to grasp subtle nuances, sarcasm, or figurative language.
- Factual Accuracy
  - Models generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements.
- Common Sense
  - Models rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations.

### Ethical Considerations and Risks

The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following:

- Bias and Fairness
  - VLMs trained on large-scale, real-world text and image data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny; the input data pre-processing is described and posterior evaluations are reported in this card.
- Misinformation and Misuse
  - VLMs can be misused to generate text that is false, misleading, or harmful.
  - Guidelines are provided for responsible use with the model; see the [Responsible Generative AI Toolkit][rai-toolkit].
- Transparency and Accountability:
  - This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes.
  - A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem.

Risks identified and mitigations:

- **Perpetuation of biases**: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases.
- **Generation of harmful content**: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases.
- **Misuse for malicious purposes**: Technical limitations and developer and end-user education can help mitigate against malicious applications of VLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy][prohibited-use].
- **Privacy violations**: Models were trained on data filtered for removal of certain personal information and other sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques.

### Benefits

At the time of release, this family of models provides high-performance open vision-language model implementations designed from the ground up for responsible AI development compared to similarly sized models.

Using the benchmark evaluation metrics described in this document, these models have been shown to provide superior performance to other, comparably sized open model alternatives.

[g3-tech-report]: https://goo.gle/Gemma3Report
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-3
[vertex-mg-gemma3]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3
[terms]: https://ai.google.dev/gemma/terms
[safety-policies]: https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/jax-ml/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[gemini-2-paper]: https://arxiv.org/abs/2312.11805
dimpac1/ETLGen-Llama-3.1-8B
dimpac1
2025-04-30T11:39:45Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-30T11:39:35Z
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** dimpac1
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
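Loading such an Unsloth fine-tune for inference with plain `transformers` could look like the sketch below; the prompt is illustrative, and nothing here is specific to this checkpoint beyond the repo id.

```python
# Sketch: running this fine-tune with plain transformers (no Unsloth at inference time).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dimpac1/ETLGen-Llama-3.1-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # or float16 on pre-Ampere GPUs
    device_map="auto",
)

prompt = "Generate an ETL plan for loading daily CSV exports into a warehouse."  # illustrative
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.inference_mode():
    out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```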
infly/inf-o1-pi0
infly
2025-04-30T11:36:28Z
5
6
transformers
[ "transformers", "safetensors", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "base_model:Qwen/Qwen2.5-32B-Instruct", "base_model:finetune:Qwen/Qwen2.5-32B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-01-03T02:55:32Z
---
library_name: transformers
base_model: Qwen/Qwen2.5-32B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---

<div align="center">
<img src="INF.jpg" width="300"/>

🤗 <a href="https://huggingface.co/infly" target="_blank">Hugging Face</a>
<br>
<a href="https://inftech-pi-zero.github.io/" target="_blank">GitHub</a>
<br>
<br>
<br>
</div>

<div align="center">
<h1>INF-o1-pi0: Initiating the Journey to the Infinity of LLM Reasoning</h1>
<p>INF AI specializes in foundational large language model technology and applications. We develop trustworthy vertical-domain models and AI-native solutions tailored to industry needs. Our team of expert AI scientists and industry leaders focuses on practical "gray-box" technologies, unlocking the productivity of large language models to drive innovation across sectors. Our mission in the INF-o1 project is to enhance the reasoning capabilities of LLMs across various industrial domains and ensure a trustworthy reasoning process to serve industry needs.</p>
<p>INFLY TECH (Shanghai) Co., Ltd.</p>
<p>2024.12.31</p>
</div>

## Overview

We are pleased to share the initial checkpoint of our reasoning foundation large language model as an open-source resource. This checkpoint is intended to help evaluate our team's data production pipeline across various domains, including mathematics, programming, logic, safety, and others. Its goal is to provide a solid starting point for developing a robust policy for the subsequent reinforcement learning process. We are hopeful that applying our reinforcement learning algorithms, supported by our carefully designed infrastructure, will lead to meaningful improvements in the model's reasoning capabilities across various domains.

At the heart of the project is our data production pipeline, which we believe plays a crucial role in enabling general reasoning capabilities. We also believe that the reasoning capability induced by the data production pipeline can address a range of real-world industrial scenarios with increasing precision and reliability. Based on our observations during the production of pi0, we have identified quality and diversity as critical factors for fostering high-quality, long Chain-of-Thought (CoT) reasoning capabilities. This insight aligns closely with conclusions drawn from the general alignment process of large language models. By meticulously designing self-verification and backtracking mechanisms to ensure process correctness in data generation, we have developed datasets that effectively induce robust long-context reasoning across diverse domains. This approach demonstrates superior performance compared to state-of-the-art o1-like models with similar objectives, highlighting the potential of our data production pipeline in advancing reasoning capabilities.
## Experiments

### Math Benchmarks

| Model                | College Math | AMC23 | MATH  | Olympiad Bench | GaoKao 2023 En | AIME24 |
| -------------------- | ------------ | ----- | ----- | -------------- | -------------- | ------ |
| Qwen2.5-32B-Instruct | 45.71        | 72.5  | 82.82 | 46.81          | 68.83          | 23.33  |
| Qwen2.5-32B-QwQ      | 43.33        | 72.5  | 88.54 | 55.56          | 78.70          | 40.00  |
| INF-o1-pi0           | 47.27        | 85.0  | 88.60 | 56.00          | 77.14          | 40.00  |

### Logical Benchmark

| Model                | lsat  |
| -------------------- | :---: |
| Qwen2.5-32B-Instruct | 33.7  |
| Qwen2.5-32B-QwQ      | 67.0  |
| INF-o1-pi0           | 71.8  |

### Safety Benchmarks

| Model                | AIR-BENCH 2024 | AIR-BENCH 2024 (CRF) |
| -------------------- | :------------: | :------------------: |
| Qwen2.5-32B-Instruct | 54.29          | 53.83                |
| Qwen2.5-32B-QwQ      | 52.61          | 53.42                |
| o1-preview           | 73.25          | 70.72                |
| INF-o1-pi0           | 77.25          | 74.49                |

### SQL Benchmarks

| Model                | bird  | spider |
| -------------------- | :---: | :----: |
| Qwen2.5-32B-Instruct | 50.2  | 77.8   |
| Qwen2.5-32B-QwQ      | 43.7  | 69.9   |
| o1-preview           | 48.9  | 70.6   |
| INF-o1-pi0           | 55.3  | 79.7   |

## Quick Start

We provide an example usage of inf-o1-pi0 below.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "infly/inf-o1-pi0"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are an advanced AI language model specializing in solving math and programming problems step by step. Carefully analyze each part of the problem, verify the accuracy of your reasoning with relevant facts and data, and provide clear, logical solutions. Reflect on and review your approach throughout the problem-solving process to ensure precision and thoroughness. Always think through the problem step by step and provide your answers accordingly."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

## Future Plan

Our pi0 serves as the foundation for ensuring that our data generation pipeline effectively leverages the long reasoning capabilities of large language models. Looking ahead, we plan to use pi0 as the initial policy checkpoint for reinforcement learning training. Through this process, we aim to significantly enhance the generalization of reasoning capabilities, particularly for tasks in the financial and medical domains, which are critical for both academic research and industrial applications.

## Contributor

### Supervisors

Wei Chu • Yinghui Xu • Yuan Qi

### INF-o1 team

**Listed in Alphabetical Order**

Chao Qu - Team Leader • Chao Wang - Infrastructure • Cheng Peng - Data Pipeline (Logical) • Dakuan Lu - Data Pipeline (Science) • Haozhe Wang - Data Pipeline (Math) & RL • Hongqing Hu - Infrastructure • Jianming Feng - Data Pipeline (Safety) • Jiaran Hao - Data Pipeline (SQL) & Infrastructure • Kelang Tian - Infrastructure • Minghao Yang - Data Pipeline (Math) • Quanbin Wang - Data Pipeline (Safety) • J.K. Liu - Data Pipeline (SQL) • Tianchu Yao - Data Pipeline & Alignment • Weidi Xu - Data Pipeline (Logical) • Xiaoyu Tan - Data Pipeline & Alignment • Yihan Songliu - Infrastructure

## License Agreement

infly-o1-pi0 supports commercial applications under a permissive [License](https://huggingface.co/infly/inf-o1-pi0/blob/main/LICENSE).

## Contact

Chao Qu: [email protected]

Xiaoyu Tan: [email protected]

## Citation

If you find our work helpful, feel free to give us a cite.

```
@misc{inftech_pi_zero2024,
    author = {INF-o1 Team},
    title = {INF-o1 (\(\pi_0\)): Initiating the Journey to the Infinity of LLM Reasoning},
    year = {2024},
    url = {https://inftech-pi-zero.github.io/},
    note = {Accessed: 2024-12-31}
}
```
LuckyLukke/grpo_turn_level_onesided_2_starter_change-1200
LuckyLukke
2025-04-30T11:36:14Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T11:33:20Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
LuckyLukke/grpo_turn_level_onesided_2_starter_change-1300
LuckyLukke
2025-04-30T11:36:05Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T11:33:18Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
annasoli/Qwen2.5-14B-Instruct_bad_med_dpR1_3x3_mixed-data-V3
annasoli
2025-04-30T11:35:58Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T11:27:23Z
---
library_name: transformers
tags:
- unsloth
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
LuckyLukke/grpo_turn_level_onesided_2_starter_change-1000
LuckyLukke
2025-04-30T11:32:05Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T11:29:13Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
LuckyLukke/grpo_turn_level_onesided_2_starter_change-800
LuckyLukke
2025-04-30T11:31:57Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T11:28:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
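The auto-generated card above leaves the quick-start section empty. A minimal, untested sketch (mirroring the usage pattern of the other conversational text-generation checkpoints in this dump; the prompt and generation settings are assumptions):

```python
# Hypothetical quick-start for LuckyLukke/grpo_turn_level_onesided_2_starter_change-800.
# The card documents no usage, so everything below is an illustrative assumption.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="LuckyLukke/grpo_turn_level_onesided_2_starter_change-800",
)
# The repo is tagged "conversational", so chat-style messages are assumed to work.
messages = [{"role": "user", "content": "Hi! Can you introduce yourself?"}]
output = generator(messages, max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```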
LuckyLukke/grpo_turn_level_onesided_2_starter_change-600
LuckyLukke
2025-04-30T11:31:25Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T11:28:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LuckyLukke/grpo_turn_level_onesided_2_starter_change-100
LuckyLukke
2025-04-30T11:31:19Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T11:28:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ggml-org/pixtral-12b-GGUF
ggml-org
2025-04-30T11:30:26Z
513
1
null
[ "gguf", "base_model:mistral-community/pixtral-12b", "base_model:quantized:mistral-community/pixtral-12b", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-23T14:35:17Z
---
license: apache-2.0
base_model: mistral-community/pixtral-12b
---

# pixtral-12b

Original model: https://huggingface.co/mistral-community/pixtral-12b

For more info, please refer to this PR: https://github.com/ggml-org/llama.cpp/pull/13065
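The card gives no download snippet; a minimal sketch for fetching the GGUF files before running them with the llama.cpp tooling from the PR above (the `allow_patterns` filter is an assumption, since the quant filenames are not listed in this card):

```python
# Fetch the GGUF weights locally; inference is then done with llama.cpp's
# multimodal tooling (see the PR linked above). The filename pattern below is
# an assumption, as this card does not enumerate the available quant files.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    "ggml-org/pixtral-12b-GGUF",
    allow_patterns=["*.gguf"],  # model weights plus the vision projector, if present
)
print("GGUF files downloaded to", local_dir)
```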
Apel-sin/gemma-3-12b-it-qat-int4-unquantized-exl2
Apel-sin
2025-04-30T11:25:14Z
0
0
transformers
[ "transformers", "gemma3", "gemma", "google", "image-text-to-text", "arxiv:1905.07830", "arxiv:1905.10044", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1705.03551", "arxiv:1911.01547", "arxiv:1907.10641", "arxiv:1903.00161", "arxiv:2009.03300", "arxiv:2304.06364", "arxiv:2103.03874", "arxiv:2110.14168", "arxiv:2311.12022", "arxiv:2108.07732", "arxiv:2107.03374", "arxiv:2210.03057", "arxiv:2106.03193", "arxiv:1910.11856", "arxiv:2502.12404", "arxiv:2502.21228", "arxiv:2404.16816", "arxiv:2104.12756", "arxiv:2311.16502", "arxiv:2203.10244", "arxiv:2404.12390", "arxiv:1810.12440", "arxiv:1908.02660", "arxiv:2312.11805", "base_model:google/gemma-3-12b-it-qat-int4-unquantized", "base_model:finetune:google/gemma-3-12b-it-qat-int4-unquantized", "license:gemma", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-04-30T11:24:26Z
---
base_model: google/gemma-3-12b-it-qat-int4-unquantized
license: gemma
tags:
- gemma3
- gemma
- google
pipeline_tag: image-text-to-text
library_name: transformers
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
  To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---

# Gemma 3 model card

**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core)

> [!Note]
> This repository corresponds to the 12B **instruction-tuned** version of the Gemma 3 model using Quantization Aware Training (QAT).
>
> **The checkpoint in this repository is unquantized; please make sure to quantize to int4 with your favorite tool.**
>
> Thanks to QAT, the model is able to preserve quality similar to `bfloat16` while significantly reducing the memory requirements to load the model.

**Resources and Technical Documentation**:

* [Gemma 3 Technical Report][g3-tech-report]
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma3]

**Terms of Use**: [Terms][terms]

**Authors**: Google DeepMind

## Model Information

Summary description and brief definition of inputs and outputs.

### Description

Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. Gemma 3 models are multimodal, handling text and image input and generating text output, with open weights for both pre-trained variants and instruction-tuned variants. Gemma 3 has a large, 128K context window, multilingual support in over 140 languages, and is available in more sizes than previous versions. Gemma 3 models are well-suited for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as laptops, desktops or your own cloud infrastructure, democratizing access to state-of-the-art AI models and helping foster innovation for everyone.

### Inputs and outputs

- **Input:**
  - Text string, such as a question, a prompt, or a document to be summarized
  - Images, normalized to 896 x 896 resolution and encoded to 256 tokens each
  - Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and 32K tokens for the 1B size
- **Output:**
  - Generated text in response to the input, such as an answer to a question, analysis of image content, or a summary of a document
  - Total output context of 8192 tokens

### Citation

```none
@article{gemma_2025,
    title={Gemma 3},
    url={https://goo.gle/Gemma3Report},
    publisher={Kaggle},
    author={Gemma Team},
    year={2025}
}
```

## Model Data

Data used for model training and how the data was processed.

### Training Dataset

These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 14 trillion tokens, the 12B model with 12 trillion tokens, the 4B model with 4 trillion tokens, and the 1B with 2 trillion tokens. Here are the key components:

- Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. The training dataset includes content in over 140 languages.
- Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code and understand code-related questions.
- Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries.
- Images: A wide range of images enables the model to perform image analysis and visual data extraction tasks.

The combination of these diverse data sources is crucial for training a powerful multimodal model that can handle a wide variety of different tasks and data formats.

### Data Preprocessing

Here are the key data cleaning and filtering methods applied to the training data:

- CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content.
- Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets.
- Additional methods: Filtering based on content quality and safety in line with [our policies][safety-policies].

## Implementation Information

Details about the model internals.

### Hardware

Gemma was trained using [Tensor Processing Unit (TPU)][tpu] hardware (TPUv4p, TPUv5p and TPUv5e). Training vision-language models (VLMs) requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain:

- Performance: TPUs are specifically designed to handle the massive computations involved in training VLMs. They can speed up training considerably compared to CPUs.
- Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality.
- Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing.
- Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training.
- These advantages are aligned with [Google's commitments to operate sustainably][sustainability].

### Software

Training was done using [JAX][jax] and [ML Pathways][ml-pathways]. JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is especially suitable for foundation models, including large language models like these ones.

Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models][gemini-2-paper]; *"the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow."*

## Evaluation

> [!Note]
> The evaluation in this section corresponds to the original checkpoint, not the QAT checkpoint.

Model evaluation metrics and results.
### Benchmark Results

These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation:

#### Reasoning and factuality

| Benchmark | Metric | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:--------------:|:-------------:|:--------------:|:--------------:|
| [HellaSwag][hellaswag] | 10-shot | 62.3 | 77.2 | 84.2 | 85.6 |
| [BoolQ][boolq] | 0-shot | 63.2 | 72.3 | 78.8 | 82.4 |
| [PIQA][piqa] | 0-shot | 73.8 | 79.6 | 81.8 | 83.3 |
| [SocialIQA][socialiqa] | 0-shot | 48.9 | 51.9 | 53.4 | 54.9 |
| [TriviaQA][triviaqa] | 5-shot | 39.8 | 65.8 | 78.2 | 85.5 |
| [Natural Questions][naturalq] | 5-shot | 9.48 | 20.0 | 31.4 | 36.1 |
| [ARC-c][arc] | 25-shot | 38.4 | 56.2 | 68.9 | 70.6 |
| [ARC-e][arc] | 0-shot | 73.0 | 82.4 | 88.3 | 89.0 |
| [WinoGrande][winogrande] | 5-shot | 58.2 | 64.7 | 74.3 | 78.8 |
| [BIG-Bench Hard][bbh] | few-shot | 28.4 | 50.9 | 72.6 | 77.7 |
| [DROP][drop] | 1-shot | 42.4 | 60.1 | 72.2 | 77.2 |

[hellaswag]: https://arxiv.org/abs/1905.07830
[boolq]: https://arxiv.org/abs/1905.10044
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[arc]: https://arxiv.org/abs/1911.01547
[winogrande]: https://arxiv.org/abs/1907.10641
[bbh]: https://paperswithcode.com/dataset/bbh
[drop]: https://arxiv.org/abs/1903.00161

#### STEM and code

| Benchmark | Metric | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:-------------:|:--------------:|:--------------:|
| [MMLU][mmlu] | 5-shot | 59.6 | 74.5 | 78.6 |
| [MMLU][mmlu] (Pro COT) | 5-shot | 29.2 | 45.3 | 52.2 |
| [AGIEval][agieval] | 3-5-shot | 42.1 | 57.4 | 66.2 |
| [MATH][math] | 4-shot | 24.2 | 43.3 | 50.0 |
| [GSM8K][gsm8k] | 8-shot | 38.4 | 71.0 | 82.6 |
| [GPQA][gpqa] | 5-shot | 15.0 | 25.4 | 24.3 |
| [MBPP][mbpp] | 3-shot | 46.0 | 60.4 | 65.6 |
| [HumanEval][humaneval] | 0-shot | 36.0 | 45.7 | 48.8 |

[mmlu]: https://arxiv.org/abs/2009.03300
[agieval]: https://arxiv.org/abs/2304.06364
[math]: https://arxiv.org/abs/2103.03874
[gsm8k]: https://arxiv.org/abs/2110.14168
[gpqa]: https://arxiv.org/abs/2311.12022
[mbpp]: https://arxiv.org/abs/2108.07732
[humaneval]: https://arxiv.org/abs/2107.03374

#### Multilingual

| Benchmark | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------------ |:-------------:|:-------------:|:--------------:|:--------------:|
| [MGSM][mgsm] | 2.04 | 34.7 | 64.3 | 74.3 |
| [Global-MMLU-Lite][global-mmlu-lite] | 24.9 | 57.0 | 69.4 | 75.7 |
| [WMT24++][wmt24pp] (ChrF) | 36.7 | 48.4 | 53.9 | 55.7 |
| [FloRes][flores] | 29.5 | 39.2 | 46.0 | 48.8 |
| [XQuAD][xquad] (all) | 43.9 | 68.0 | 74.5 | 76.8 |
| [ECLeKTic][eclektic] | 4.69 | 11.0 | 17.2 | 24.4 |
| [IndicGenBench][indicgenbench] | 41.4 | 57.2 | 61.7 | 63.4 |

[mgsm]: https://arxiv.org/abs/2210.03057
[flores]: https://arxiv.org/abs/2106.03193
[xquad]: https://arxiv.org/abs/1910.11856v3
[global-mmlu-lite]: https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite
[wmt24pp]: https://arxiv.org/abs/2502.12404v1
[eclektic]: https://arxiv.org/abs/2502.21228
[indicgenbench]: https://arxiv.org/abs/2404.16816

#### Multimodal

| Benchmark | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |:-------------:|:--------------:|:--------------:|
| [COCOcap][coco-cap] | 102 | 111 | 116 |
| [DocVQA][docvqa] (val) | 72.8 | 82.3 | 85.6 |
| [InfoVQA][info-vqa] (val) | 44.1 | 54.8 | 59.4 |
| [MMMU][mmmu] (pt) | 39.2 | 50.3 | 56.1 |
| [TextVQA][textvqa] (val) | 58.9 | 66.5 | 68.6 |
| [RealWorldQA][realworldqa] | 45.5 | 52.2 | 53.9 |
| [ReMI][remi] | 27.3 | 38.5 | 44.8 |
| [AI2D][ai2d] | 63.2 | 75.2 | 79.0 |
| [ChartQA][chartqa] | 63.6 | 74.7 | 76.3 |
| [VQAv2][vqav2] | 63.9 | 71.2 | 72.9 |
| [BLINK][blinkvqa] | 38.0 | 35.9 | 39.6 |
| [OKVQA][okvqa] | 51.0 | 58.7 | 60.2 |
| [TallyQA][tallyqa] | 42.5 | 51.8 | 54.3 |
| [SpatialSense VQA][ss-vqa] | 50.9 | 60.0 | 59.4 |
| [CountBenchQA][countbenchqa] | 26.1 | 17.8 | 68.0 |

[coco-cap]: https://cocodataset.org/#home
[docvqa]: https://www.docvqa.org/
[info-vqa]: https://arxiv.org/abs/2104.12756
[mmmu]: https://arxiv.org/abs/2311.16502
[textvqa]: https://textvqa.org/
[realworldqa]: https://paperswithcode.com/dataset/realworldqa
[remi]: https://arxiv.org/html/2406.09175v1
[ai2d]: https://allenai.org/data/diagrams
[chartqa]: https://arxiv.org/abs/2203.10244
[vqav2]: https://visualqa.org/index.html
[blinkvqa]: https://arxiv.org/abs/2404.12390
[okvqa]: https://okvqa.allenai.org/
[tallyqa]: https://arxiv.org/abs/1810.12440
[ss-vqa]: https://arxiv.org/abs/1908.02660
[countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/

## Ethics and Safety

Ethics and safety evaluation approach and results.

### Evaluation Approach

Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including:

- **Child Safety**: Evaluation of text-to-text and image-to-text prompts covering child safety policies, including child sexual abuse and exploitation.
- **Content Safety:** Evaluation of text-to-text and image-to-text prompts covering safety policies including harassment, violence and gore, and hate speech.
- **Representational Harms**: Evaluation of text-to-text and image-to-text prompts covering safety policies including bias, stereotyping, and harmful associations or inaccuracies.

In addition to development-level evaluations, we conduct "assurance evaluations" which are our 'arms-length' internal evaluations for responsibility governance decision making. They are conducted separately from the model development team, to inform decision making about release. High level findings are fed back to the model team, but prompt sets are held out to prevent overfitting and preserve the results' ability to inform decision making. Assurance evaluation results are reported to our Responsibility & Safety Council as part of release review.

### Evaluation Results

For all areas of safety testing, we saw major improvements in the categories of child safety, content safety, and representational harms relative to previous Gemma models. All testing was conducted without safety filters to evaluate the model capabilities and behaviors. For both text-to-text and image-to-text, and across all model sizes, the model produced minimal policy violations, and showed significant improvements over previous Gemma models' performance with respect to ungrounded inferences. A limitation of our evaluations was that they included only English-language prompts.
## Usage and Limitations

These models have certain limitations that users should be aware of.

### Intended Usage

Open vision-language models (VLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development.

- Content Creation and Communication
  - Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts.
  - Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications.
  - Text Summarization: Generate concise summaries of a text corpus, research papers, or reports.
  - Image Data Extraction: These models can be used to extract, interpret, and summarize visual data for text communications.
- Research and Education
  - Natural Language Processing (NLP) and VLM Research: These models can serve as a foundation for researchers to experiment with VLM and NLP techniques, develop algorithms, and contribute to the advancement of the field.
  - Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice.
  - Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics.

### Limitations

- Training Data
  - The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses.
  - The scope of the training dataset determines the subject areas the model can handle effectively.
- Context and Task Complexity
  - Models are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging.
  - A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point).
- Language Ambiguity and Nuance
  - Natural language is inherently complex. Models might struggle to grasp subtle nuances, sarcasm, or figurative language.
- Factual Accuracy
  - Models generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements.
- Common Sense
  - Models rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations.

### Ethical Considerations and Risks

The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following:

- Bias and Fairness
  - VLMs trained on large-scale, real-world text and image data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, with input data pre-processing described and posterior evaluations reported in this card.
- Misinformation and Misuse
  - VLMs can be misused to generate text that is false, misleading, or harmful.
  - Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit][rai-toolkit].
- Transparency and Accountability:
  - This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes.
  - A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem.

Risks identified and mitigations:

- **Perpetuation of biases**: Continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases are encouraged.
- **Generation of harmful content**: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases.
- **Misuse for malicious purposes**: Technical limitations and developer and end-user education can help mitigate malicious applications of VLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy][prohibited-use].
- **Privacy violations**: Models were trained on data filtered for removal of certain personal information and other sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques.

### Benefits

At the time of release, this family of models provides high-performance open vision-language model implementations designed from the ground up for responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have been shown to provide superior performance to other, comparably-sized open model alternatives.

[g3-tech-report]: https://goo.gle/Gemma3Report
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-3
[vertex-mg-gemma3]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3
[terms]: https://ai.google.dev/gemma/terms
[safety-policies]: https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/jax-ml/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[gemini-2-paper]: https://arxiv.org/abs/2312.11805
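As a usage illustration (not from the original card): the general Gemma 3 image-text-to-text pattern in Transformers looks roughly like the sketch below. It is shown against the base `google/gemma-3-12b-it` checkpoint because this repository holds an EXL2 requant, which is loaded with ExLlamaV2-based tooling instead; the image URL and generation settings are placeholders.

```python
# Illustrative sketch of the Gemma 3 chat interface via transformers.
# Model id, image URL, and max_new_tokens are assumptions for illustration.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="google/gemma-3-12b-it")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/photo.jpg"},  # placeholder image
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]
output = pipe(text=messages, max_new_tokens=64)
print(output[0]["generated_text"][-1]["content"])
```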
Prince53/deep-speech-detection
Prince53
2025-04-30T11:17:09Z
0
0
tf-keras
[ "tf-keras", "audio-classification", "deep-speech-detection", "tensorflow", "keras", "license:apache-2.0", "region:us" ]
audio-classification
2025-04-30T11:04:41Z
---
license: apache-2.0
tags:
- audio-classification
- deep-speech-detection
- tensorflow
- keras
---

# Model Card for Deep Speech Detection

## Model Description

This is a TensorFlow/Keras CNN model trained to detect deepfake or synthetic speech with >95% accuracy. It uses audio features (MFCCs, chroma, spectral centroid, etc.) extracted with `librosa`.

## Intended Use

- Deepfake speech detection
- Audio authenticity verification

## Dependencies

```bash
pip install tensorflow==2.10.0 librosa==0.10.1 joblib==1.3.2 numpy==1.22.4 pandas==1.5.3 scikit-learn==1.2.2
```

## Usage

```python
import os

import tensorflow as tf
import librosa
import joblib
import numpy as np
import pandas as pd
# The module-level snapshot_download works across huggingface_hub versions.
from huggingface_hub import hf_hub_download, snapshot_download

# Download model and preprocessing files
repo_name = "Prince53/deep-speech-detection"
model_dir = "downloaded_model"
scaler_path = hf_hub_download(repo_name, "scaler.pkl", local_dir=model_dir)
label_encoder_path = hf_hub_download(repo_name, "label_encoder.pkl", local_dir=model_dir)
snapshot_download(repo_name, local_dir=model_dir, allow_patterns="saved_model/*")

# Load model and preprocessing objects
model = tf.keras.models.load_model(os.path.join(model_dir, "saved_model"))
scaler = joblib.load(scaler_path)
label_encoder = joblib.load(label_encoder_path)

# Feature extraction: slide a 2-second window over the audio in 0.25-second steps
def segment_and_extract_features(audio, sr=16000):
    segment_samples = int(2.0 * sr)
    step_samples = int(0.25 * sr)
    segments = [audio[i:i+segment_samples] for i in range(0, len(audio) - segment_samples + 1, step_samples)]
    features = []
    for segment in segments:
        if len(segment) < segment_samples:
            continue
        mfccs = librosa.feature.mfcc(y=segment, sr=sr, n_mfcc=13)
        chroma = librosa.feature.chroma_stft(y=segment, sr=sr)
        spectral_centroid = librosa.feature.spectral_centroid(y=segment, sr=sr)
        spectral_bandwidth = librosa.feature.spectral_bandwidth(y=segment, sr=sr)
        rolloff = librosa.feature.spectral_rolloff(y=segment, sr=sr)
        zero_crossing_rate = librosa.feature.zero_crossing_rate(y=segment)
        feature_dict = {
            'mfcc_mean': np.mean(mfccs, axis=1),
            'mfcc_std': np.std(mfccs, axis=1),
            'chroma': np.mean(chroma, axis=1),
            'spectral_centroid': np.mean(spectral_centroid),
            'spectral_bandwidth': np.mean(spectral_bandwidth),
            'rolloff': np.mean(rolloff),
            'zero_crossing_rate': np.mean(zero_crossing_rate)
        }
        features.append(feature_dict)
    return features

# Classify audio: the final label is a majority vote over per-segment predictions
audio, sr = librosa.load("path/to/audio.wav", sr=16000)
segments = segment_and_extract_features(audio, sr)
segment_features = pd.concat([
    pd.DataFrame([seg['mfcc_mean'] for seg in segments]),
    pd.DataFrame([seg['mfcc_std'] for seg in segments]),
    pd.DataFrame([seg['chroma'] for seg in segments]),
    pd.DataFrame([[seg['spectral_centroid'], seg['spectral_bandwidth'], seg['rolloff'], seg['zero_crossing_rate']] for seg in segments])
], axis=1)
segment_features = scaler.transform(segment_features)
segment_features = segment_features.reshape(segment_features.shape[0], segment_features.shape[1], 1)
predictions = model.predict(segment_features)
segment_labels = np.argmax(predictions, axis=1)
confidence_scores = np.mean(predictions, axis=0)
final_label = label_encoder.inverse_transform([np.argmax(np.bincount(segment_labels))])[0]
print(f"Confidence Scores: Real={confidence_scores[0]:.4f}, Fake={confidence_scores[1]:.4f}")
print(f"Classification: {final_label} ({0 if final_label == 'Real' else 1})")
```

## Limitations

- Requires mono audio at 16kHz sampling rate.
- May struggle with low-quality audio or unseen domains.
- Trained on the Comb4 dataset.

## Training Data

- Dataset: Comb4 (custom dataset with real and fake audio)
- Size: [Update with number of samples]

## Evaluation

- Test Accuracy: [Update with >95%]
ggml-org/SmolVLM2-256M-Video-Instruct-GGUF
ggml-org
2025-04-30T11:14:32Z
148
2
null
[ "gguf", "base_model:HuggingFaceTB/SmolVLM2-256M-Video-Instruct", "base_model:quantized:HuggingFaceTB/SmolVLM2-256M-Video-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-21T19:06:05Z
---
license: apache-2.0
base_model: HuggingFaceTB/SmolVLM2-256M-Video-Instruct
---

# SmolVLM2-256M-Video-Instruct

Original model: https://huggingface.co/HuggingFaceTB/SmolVLM2-256M-Video-Instruct

For more info, please refer to this PR: https://github.com/ggml-org/llama.cpp/pull/13050
ggml-org/SmolVLM2-2.2B-Instruct-GGUF
ggml-org
2025-04-30T11:14:04Z
385
1
null
[ "gguf", "base_model:HuggingFaceTB/SmolVLM2-2.2B-Instruct", "base_model:quantized:HuggingFaceTB/SmolVLM2-2.2B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-21T19:03:24Z
---
license: apache-2.0
base_model: HuggingFaceTB/SmolVLM2-2.2B-Instruct
---

# SmolVLM2-2.2B-Instruct

Original model: https://huggingface.co/HuggingFaceTB/SmolVLM2-2.2B-Instruct

For more info, please refer to this PR: https://github.com/ggml-org/llama.cpp/pull/13050
Yutao-Zhou/SmolLM2-FT-MyDataset
Yutao-Zhou
2025-04-30T11:11:37Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "smol-course", "module_1", "trl", "sft", "conversational", "base_model:HuggingFaceTB/SmolLM2-135M", "base_model:finetune:HuggingFaceTB/SmolLM2-135M", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T11:11:14Z
---
base_model: HuggingFaceTB/SmolLM2-135M
library_name: transformers
model_name: SmolLM2-FT-MyDataset
tags:
- generated_from_trainer
- smol-course
- module_1
- trl
- sft
licence: license
---

# Model Card for SmolLM2-FT-MyDataset

This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Yutao-Zhou/SmolLM2-FT-MyDataset", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zyt861107796-the-university-of-melbourne/huggingface/runs/a9izsrlw)

This model was trained with SFT.

### Framework versions

- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title = {{TRL: Transformer Reinforcement Learning}},
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year = 2020,
    journal = {GitHub repository},
    publisher = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
mradermacher/M1NDB0T-1111-14B-i1-GGUF
mradermacher
2025-04-30T11:10:31Z
0
0
transformers
[ "transformers", "gguf", "mindbot", "synthetic-entity", "agi-companion", "digital-human", "llama-factory", "qwen3-14b", "mindexpander", "en", "base_model:TheMindExpansionNetwork/M1NDB0T-1111-14B", "base_model:quantized:TheMindExpansionNetwork/M1NDB0T-1111-14B", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-04-30T02:50:44Z
---
base_model: TheMindExpansionNetwork/M1NDB0T-1111-14B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mindbot
- synthetic-entity
- agi-companion
- digital-human
- llama-factory
- qwen3-14b
- mindexpander
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->

weighted/imatrix quants of https://huggingface.co/TheMindExpansionNetwork/M1NDB0T-1111-14B

<!-- provided-files -->

static quants are available at https://huggingface.co/mradermacher/M1NDB0T-1111-14B-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-i1-GGUF/resolve/main/M1NDB0T-1111-14B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-i1-GGUF/resolve/main/M1NDB0T-1111-14B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-i1-GGUF/resolve/main/M1NDB0T-1111-14B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-i1-GGUF/resolve/main/M1NDB0T-1111-14B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-i1-GGUF/resolve/main/M1NDB0T-1111-14B.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-i1-GGUF/resolve/main/M1NDB0T-1111-14B.i1-IQ2_M.gguf) | i1-IQ2_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-i1-GGUF/resolve/main/M1NDB0T-1111-14B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-i1-GGUF/resolve/main/M1NDB0T-1111-14B.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-i1-GGUF/resolve/main/M1NDB0T-1111-14B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-i1-GGUF/resolve/main/M1NDB0T-1111-14B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-i1-GGUF/resolve/main/M1NDB0T-1111-14B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-i1-GGUF/resolve/main/M1NDB0T-1111-14B.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-i1-GGUF/resolve/main/M1NDB0T-1111-14B.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-i1-GGUF/resolve/main/M1NDB0T-1111-14B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-i1-GGUF/resolve/main/M1NDB0T-1111-14B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-i1-GGUF/resolve/main/M1NDB0T-1111-14B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-i1-GGUF/resolve/main/M1NDB0T-1111-14B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-i1-GGUF/resolve/main/M1NDB0T-1111-14B.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-i1-GGUF/resolve/main/M1NDB0T-1111-14B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-i1-GGUF/resolve/main/M1NDB0T-1111-14B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-i1-GGUF/resolve/main/M1NDB0T-1111-14B.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-i1-GGUF/resolve/main/M1NDB0T-1111-14B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-i1-GGUF/resolve/main/M1NDB0T-1111-14B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-i1-GGUF/resolve/main/M1NDB0T-1111-14B.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
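For example, a single quant can be fetched programmatically (a minimal sketch; the filename is taken verbatim from the table above):

```python
# Download one imatrix quant for use with GGUF-compatible runtimes such as llama.cpp.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    "mradermacher/M1NDB0T-1111-14B-i1-GGUF",
    "M1NDB0T-1111-14B.i1-Q4_K_M.gguf",  # "fast, recommended" per the table above
)
print(path)
```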
satonu0308/distilbert-base-uncased-finetuned-fake-or-real-news
satonu0308
2025-04-30T11:08:54Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-24T11:11:00Z
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-fake-or-real-news
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-fake-or-real-news

This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0007
- Accuracy: 0.9998

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Tokenizers 0.21.1
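A minimal inference sketch (the example headline is illustrative, and the mapping of predicted labels to fake/real is an assumption, since the card does not document it):

```python
# Hedged usage sketch for the fine-tuned classifier; label semantics are not
# documented in the card, so inspect the returned label names before relying on them.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="satonu0308/distilbert-base-uncased-finetuned-fake-or-real-news",
)
print(clf("Scientists announce a breakthrough in room-temperature superconductors."))
# -> [{'label': ..., 'score': ...}]
```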
rinabuoy/nllb-200-600M-2Ways-No-GG-Pairs-v11-Reg
rinabuoy
2025-04-30T11:08:53Z
0
0
transformers
[ "transformers", "safetensors", "m2m_100", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-04-30T11:05:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
a-mannion/umls-kgi-bert-es
a-mannion
2025-04-30T11:07:10Z
14
0
transformers
[ "transformers", "pytorch", "distilbert", "feature-extraction", "medical", "fill-mask", "es", "arxiv:2307.11170", "license:apache-2.0", "endpoints_compatible", "region:us" ]
fill-mask
2023-11-13T16:43:39Z
---
license: apache-2.0
language:
- es
tags:
- medical
pipeline_tag: fill-mask
---

# UMLS-KGI-BERT-ES

<!-- Provide a quick summary of what the model is/does. -->

This is a BERT encoder trained on the Spanish-language section of the European Clinical Case corpus as well as the UMLS metathesaurus knowledge graph, as described in [this paper](https://aclanthology.org/2023.clinicalnlp-1.35/). The training corpus consists of a custom combination of clinical documents from the E3C and text sequences derived from the metathesaurus (see our [Github repo](https://github.com/ap-mannion/bertify-umls) for more details).

## Model Details

This model was trained using a multi-task approach combining Masked Language Modelling with knowledge-graph-based classification/fill-mask type objectives. The idea behind this framework was to improve the robustness of specialised biomedical BERT models by having them learn from structured data as well as natural language, while remaining in the cross-entropy-based learning paradigm.

- **Developed by:** Aidan Mannion
- **Funded by:** GENCI-IDRIS grant AD011013535R1
- **Model type:** DistilBERT
- **Language(s) (NLP):** Spanish

For further details on the model architecture, training objectives, hardware & software used, as well as the preliminary downstream evaluation experiments carried out, refer to the [ArXiv paper](https://arxiv.org/abs/2307.11170).

### UMLS-KGI Models

| **Model** | **Model Repo** | **Dataset Size** | **Base Architecture** | **Base Model** | **Total KGI training steps** |
|:---|:---|:---|:---|:---|:---|
| UMLS-KGI-BERT-multilingual | [url-multi](https://huggingface.co/ap-mannion/umls-kgi-bert-multilingual) | 940MB | DistilBERT | n/a | 163,904 |
| UMLS-KGI-BERT-FR | [url-fr](https://huggingface.co/ap-mannion/umls-kgi-bert-fr) | 604MB | DistilBERT | n/a | 126,720 |
| UMLS-KGI-BERT-EN | [url-en](https://huggingface.co/ap-mannion/umls-kgi-bert-en) | 174MB | DistilBERT | n/a | 19,008 |
| UMLS-KGI-BERT-ES | [url-es](https://huggingface.co/ap-mannion/umls-kgi-bert-es) | 162MB | DistilBERT | n/a | 18,176 |
| DrBERT-UMLS-KGI | [url-drbert](https://huggingface.co/ap-mannion/drbert-umls-kgi) | 604MB | CamemBERT/RoBERTa | [DrBERT-4GB](https://huggingface.co/Dr-BERT/DrBERT-4GB) | 126,720 |
| PubMedBERT-UMLS-KGI | [url-pubmedbert](https://huggingface.co/ap-mannion/pubmedbert-umls-kgi) | 174MB | BERT | microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract | 19,008 |
| BioRoBERTa-ES-UMLS-KGI | [url-bioroberta](https://huggingface.co/ap-mannion/bioroberta-es-umls-kgi) | 162MB | RoBERTa | [RoBERTa-base-biomedical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-es) | 18,176 |

### Direct/Downstream Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

This model is intended for use in experimental clinical/biomedical NLP work, either as a part of a larger system requiring text encoding or fine-tuned on a specific downstream task requiring clinical language modelling. It has **not** been sufficiently tested for accuracy, robustness and bias to be used in production settings.
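A minimal fill-mask sketch (not from the original card; the Spanish example sentence is purely illustrative):

```python
from transformers import pipeline

# fill-mask is this model's pipeline tag; [MASK] is the DistilBERT mask token
unmasker = pipeline("fill-mask", model="a-mannion/umls-kgi-bert-es")

for pred in unmasker("El paciente presenta [MASK] arterial de larga evolución."):
    print(f"{pred['token_str']}\t{pred['score']:.3f}")
```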
### Out-of-Scope Use

Experiments on general-domain data suggest that, given its specialised training corpus, this model is **not** suitable for use on out-of-domain NLP tasks, and we recommend that it only be used for processing clinical text.

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

- [European Clinical Case Corpus](https://live.european-language-grid.eu/catalogue/corpus/7618)
- [UMLS Metathesaurus](https://www.nlm.nih.gov/research/umls/index.html)

#### Training Hyperparameters

- sequence length: 256
- learning rate 7.5e-5
- linear learning rate schedule with 10,770 warmup steps
- effective batch size 1500 (15 sequences per batch x 100 gradient accumulation steps)
- MLM masking probability 0.15

**Training regime:** The model was trained with fp16 non-mixed precision, using the AdamW optimizer with default parameters.

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

## Citation [BibTeX]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

```
@inproceedings{mannion-etal-2023-umls,
    title = "{UMLS}-{KGI}-{BERT}: Data-Centric Knowledge Integration in Transformers for Biomedical Entity Recognition",
    author = "Mannion, Aidan and Schwab, Didier and Goeuriot, Lorraine",
    booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.clinicalnlp-1.35",
    pages = "312--322",
    abstract = "Pre-trained transformer language models (LMs) have in recent years become the dominant paradigm in applied NLP. These models have achieved state-of-the-art performance on tasks such as information extraction, question answering, sentiment analysis, document classification and many others. In the biomedical domain, significant progress has been made in adapting this paradigm to NLP tasks that require the integration of domain-specific knowledge as well as statistical modelling of language. In particular, research in this area has focused on the question of how best to construct LMs that take into account not only the patterns of token distribution in medical text, but also the wealth of structured information contained in terminology resources such as the UMLS. This work contributes a data-centric paradigm for enriching the language representations of biomedical transformer-encoder LMs by extracting text sequences from the UMLS. This allows for graph-based learning objectives to be combined with masked-language pre-training. Preliminary results from experiments in the extension of pre-trained LMs as well as training from scratch show that this framework improves downstream performance on multiple biomedical and clinical Named Entity Recognition (NER) tasks. All pre-trained models, data processing pipelines and evaluation scripts will be made publicly available.",
}
```

```
@misc{mannion2023umlskgibert,
    title={UMLS-KGI-BERT: Data-Centric Knowledge Integration in Transformers for Biomedical Entity Recognition},
    author={Aidan Mannion and Thierry Chevalier and Didier Schwab and Lorraine Goeuriot},
    year={2023},
    eprint={2307.11170},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
SalomonMetre13/nllb-fra-shr-mt-v2
SalomonMetre13
2025-04-30T11:03:40Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "m2m_100", "text2text-generation", "generated_from_trainer", "base_model:facebook/nllb-200-distilled-600M", "base_model:finetune:facebook/nllb-200-distilled-600M", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-04-30T10:19:54Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: facebook/nllb-200-distilled-600M tags: - generated_from_trainer model-index: - name: nllb-fra-shr-mt-v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nllb-fra-shr-mt-v2 This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 7.2623 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 48 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.1671 | 100 | 8.7837 | | No log | 0.3342 | 200 | 7.9479 | | No log | 0.5013 | 300 | 7.5596 | | No log | 0.6683 | 400 | 7.3522 | | 8.2614 | 0.8354 | 500 | 7.2623 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
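The generated card gives no usage snippet; here is a hedged inference sketch with 🤗 Transformers. The target-language token is the main assumption: NLLB uses FLORES-style codes, and `shr_Latn` is our guess for the `shr` side of this fine-tune — check the repo's tokenizer vocabulary for the code it actually uses.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "SalomonMetre13/nllb-fra-shr-mt-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Bonjour, comment allez-vous ?"  # illustrative French source sentence
inputs = tokenizer(text, return_tensors="pt")

out = model.generate(
    **inputs,
    # "shr_Latn" is an assumed target-language code; verify against the tokenizer
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("shr_Latn"),
    max_length=128,
)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```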
RabotniKuma/Fast-Math-Qwen3-14B
RabotniKuma
2025-04-30T11:02:57Z
0
0
null
[ "safetensors", "qwen3", "base_model:Qwen/Qwen3-14B", "base_model:finetune:Qwen/Qwen3-14B", "license:apache-2.0", "region:us" ]
null
2025-04-30T04:36:38Z
---
license: apache-2.0
base_model:
- Qwen/Qwen3-14B
---

# Fast-Math-Qwen3-14B

By applying SFT and GRPO on difficult math problems, we enhanced the performance of `DeepSeek-R1-Distill-Qwen-14B` and developed [`Fast-Math-R1-14B`](https://huggingface.co/RabotniKuma/Fast-Math-R1-14B), which achieves approx. 30% faster inference on average, while maintaining accuracy.

In addition, we trained and open-sourced `Fast-Math-Qwen3-14B`, an efficiency-optimized version of `Qwen3-14B`, following the same approach.

**Compared to Qwen3-14B, this model enables approx. 65% faster inference on average, with minimal loss in performance.**

Technical details can be found in [our github repository](https://github.com/analokmaus/kaggle-aimo2-fast-math-r1/tree/master).

**Note:** This model likely inherits the ability to perform inference in TIR mode from the original model. However, all of our experiments were conducted in CoT mode, and its performance in TIR mode has not been evaluated.

## Evaluation

<img src='https://github.com/analokmaus/kaggle-aimo2-fast-math-r1/blob/master/assets/pass1_aime_all.png?raw=true' max-height='400px'>

|                     |              | AIME 2024        |                    | AIME 2025        |                    |
| ------------------- | ------------ | ---------------- | ------------------ | ---------------- | ------------------ |
| Model               | Token budget | Pass@1 (avg. 64) | Mean output tokens | Pass@1 (avg. 64) | Mean output tokens |
| Qwen3-14B           | 32000        | 79.3             | 13669              | 69.5             | 16481              |
|                     | 24000        | 75.9             | 13168              | 65.6             | 15235              |
|                     | 16000        | 64.5             | 11351              | 50.4             | 12522              |
|                     | 12000        | 49.7             | 9746               | 36.3             | 10353              |
|                     | 8000         | 28.4             | 7374               | 19.5             | 7485               |
| Fast-Math-Qwen3-14B | 32000        | 77.6             | 9740               | 66.6             | 12281              |
|                     | 24000        | 76.5             | 9634               | 65.3             | 11847              |
|                     | 16000        | 72.6             | 8793               | 60.1             | 10195              |
|                     | 12000        | 65.1             | 7775               | 49.4             | 8733               |
|                     | 8000         | 50.7             | 6260               | 36               | 6618               |

# Inference

## vLLM

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_path = 'RabotniKuma/Fast-Math-Qwen3-14B'
vllm_engine = LLM(
    model=model_path,
    max_model_len=16000,
    gpu_memory_utilization=0.9,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

sampling_params = SamplingParams(
    temperature=1.0,
    top_p=0.90,
    min_p=0.05,
    max_tokens=8192,
    stop='</think>',  # For even faster inference, applying early stopping at the </think> tag and extracting the final boxed content is recommended.
)
messages = [
    {
        'role': 'user',
        'content': (
            # Raw string so that \boxed is not interpreted as a backspace escape
            r'Solve the problem, and put the answer in \boxed{}. '
            'Sarah is twice as old as her youngest brother. If the difference between their ages is 15 years. How old is her youngest brother?'
        )
    }
]
prompt = tokenizer.apply_chat_template(
    conversation=messages,
    tokenize=False,
    add_generation_prompt=True
)
response = vllm_engine.generate(prompt, sampling_params=sampling_params)
```
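The card recommends stopping at `</think>` and extracting the final boxed content; here is a small post-processing sketch of that step (the regex and helper name are ours, not from the repo):

```python
import re

def extract_boxed_answer(generated: str) -> str | None:
    """Return the contents of the last \\boxed{...} in a generation, if any."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", generated)
    return matches[-1] if matches else None

# vLLM returns a list of RequestOutput objects; take the first completion's text
text = response[0].outputs[0].text
print(extract_boxed_answer(text))  # e.g. '15' for the example prompt above
```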
RabotniKuma/Fast-Math-R1-14B
RabotniKuma
2025-04-30T11:00:51Z
36
3
null
[ "safetensors", "qwen2", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B", "license:apache-2.0", "region:us" ]
null
2025-04-11T07:58:55Z
---
license: apache-2.0
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
---

# Kaggle AI Mathematical Olympiad - Progress Prize 2 - 9th Place Solution (Fast-Math-R1-14B)

## Team

- Hiroshi Yoshihara @ [Aillis Inc.](https://aillis.jp/en), [The Univ. of Tokyo](https://publichealth.f.u-tokyo.ac.jp/#page_home)
- Yuichi Inoue @ [Sakana AI](https://sakana.ai)
- Taiki Yamaguchi @ [Rist Inc.](https://www.rist.co.jp/en/)

# Summary

By applying SFT and GRPO on difficult math problems, we enhanced the performance of `DeepSeek-R1-Distill-Qwen-14B` and developed `Fast-Math-R1-14B`, which achieves up to 60% (on average approx. 30%) faster inference while maintaining accuracy.

Technical details can be found in [Kaggle Discussion](https://www.kaggle.com/competitions/ai-mathematical-olympiad-progress-prize-2/discussion/571252) and [Github](https://github.com/analokmaus/kaggle-aimo2-fast-math-r1).

# Evaluation

<img src="https://github.com/analokmaus/kaggle-aimo2-fast-math-r1/blob/master/assets/pass1_aime_all.png?raw=true" max-height="400px">

## DS-R1-Qwen-14B vs Fast-Math-R1-14B (Ours)

|                              |              | AIME 2024        |                    | AIME 2025        |                    |
| ---------------------------- | ------------ | ---------------- | ------------------ | ---------------- | ------------------ |
| Model                        | Token budget | Pass@1 (avg. 64) | Mean output tokens | Pass@1 (avg. 64) | Mean output tokens |
| DeepSeek-R1-Distill-Qwen-14B | 32000        | 66.9             | 11026              | 49.9             | 12310              |
|                              | 24000        | 65.7             | 10784              | 49.7             | 11978              |
|                              | 16000        | 61               | 9708               | 46.2             | 10567              |
|                              | 12000        | 53.7             | 8472               | 39.9             | 9008               |
|                              | 8000         | 41.8             | 6587               | 31.1             | 6788               |
| Fast-Math-R1-14B             | 32000        | 68               | 8217               | 49.6             | 9663               |
|                              | 24000        | 67.9             | 8209               | 49.6             | 9627               |
|                              | 16000        | 66.7             | 8017               | 48.4             | 9083               |
|                              | 12000        | 61.9             | 7362               | 45.2             | 8048               |
|                              | 8000         | 51.4             | 5939               | 36.3             | 6174               |

## OpenMath-Nemotron-14B vs Fast-OpenMath-Nemotron-14B (Ours)

|                            |              | AIME 2024        |                    | AIME 2025        |                    |
| -------------------------- | ------------ | ---------------- | ------------------ | ---------------- | ------------------ |
| Model                      | Token budget | Pass@1 (avg. 64) | Mean output tokens | Pass@1 (avg. 64) | Mean output tokens |
| OpenMath-Nemotron-14B      | 32000        | 76.2             | 11493              | 64.5             | 13414              |
|                            | 24000        | 75.4             | 11417              | 63.4             | 13046              |
|                            | 16000        | 66               | 10399              | 54.2             | 11422              |
|                            | 12000        | 55               | 9053               | 40               | 9609               |
|                            | 8000         | 36               | 6978               | 27.2             | 7083               |
| [Fast-OpenMath-Nemotron-14B](https://huggingface.co/RabotniKuma/Fast-OpenMath-Nemotron-14B) | 32000 | 70.7 | 9603 | 61.4 | 11424 |
|                            | 24000        | 70.6             | 9567               | 60.9             | 11271              |
|                            | 16000        | 66.6             | 8954               | 55.3             | 10190              |
|                            | 12000        | 59.4             | 7927               | 45.6             | 8752               |
|                            | 8000         | 47.6             | 6282               | 33.8             | 6589               |

## Qwen3-14B vs Fast-Math-Qwen3-14B

|                     |              | AIME 2024        |                    | AIME 2025        |                    |
| ------------------- | ------------ | ---------------- | ------------------ | ---------------- | ------------------ |
| Model               | Token budget | Pass@1 (avg. 64) | Mean output tokens | Pass@1 (avg. 64) | Mean output tokens |
| Qwen3-14B           | 32000        | 79.3             | 13669              | 69.5             | 16481              |
|                     | 24000        | 75.9             | 13168              | 65.6             | 15235              |
|                     | 16000        | 64.5             | 11351              | 50.4             | 12522              |
|                     | 12000        | 49.7             | 9746               | 36.3             | 10353              |
|                     | 8000         | 28.4             | 7374               | 19.5             | 7485               |
| [Fast-Math-Qwen3-14B](https://huggingface.co/RabotniKuma/Fast-Math-Qwen3-14B) | 32000 | 77.6 | 9740 | 66.6 | 12281 |
|                     | 24000        | 76.5             | 9634               | 65.3             | 11847              |
|                     | 16000        | 72.6             | 8793               | 60.1             | 10195              |
|                     | 12000        | 65.1             | 7775               | 49.4             | 8733               |
|                     | 8000         | 50.7             | 6260               | 36               | 6618               |

# Dataset

- [Our first stage SFT dataset](https://huggingface.co/datasets/RabotniKuma/Fast-Math-R1-SFT)
- [Our second stage GRPO dataset](https://huggingface.co/datasets/RabotniKuma/Fast-Math-R1-GRPO)

# Inference

## vLLM

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_path = 'RabotniKuma/Fast-Math-R1-14B'
vllm_engine = LLM(
    model=model_path,
    max_model_len=8192,
    gpu_memory_utilization=0.9,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

sampling_params = SamplingParams(
    temperature=1.0,
    top_p=0.90,
    min_p=0.05,
    max_tokens=8192,
    stop='</think>',  # For even faster inference, applying early stopping at the </think> tag and extracting the final boxed content is recommended.
)
messages = [
    {
        'role': 'user',
        'content': (
            # Raw string so that \boxed is not interpreted as a backspace escape
            r'Solve the problem, and put the answer in \boxed{}. '
            'Sarah is twice as old as her youngest brother. If the difference between their ages is 15 years. How old is her youngest brother?'
        )
    }
]
prompt = tokenizer.apply_chat_template(
    conversation=messages,
    tokenize=False,
    add_generation_prompt=True
)
response = vllm_engine.generate(prompt, sampling_params=sampling_params)
```
woonstadrotterdam/woningwaardering-llama3-8b-4bit-v1
woonstadrotterdam
2025-04-30T11:00:49Z
3
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "en", "dataset:woonstadrotterdam/woningwaarderingen", "base_model:meta-llama/Llama-3.1-8B", "base_model:adapter:meta-llama/Llama-3.1-8B", "license:llama3.1", "model-index", "region:us" ]
null
2025-04-24T07:20:44Z
---
library_name: peft
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: woningwaardering-llama3-8b-4bit-v1
  results:
  - task:
      name: Woningwaardering
      type: text_generation
      description: Generate a woningwaardering for a dwelling based on a short description of the dwelling.
    metrics:
    - name: MAE
      type: mae
      value: 3.6
    - name: MAPE
      type: mape
      value: 2.3
datasets:
- woonstadrotterdam/woningwaarderingen
language:
- en
---

# woningwaardering-llama3-8b-4bit-v1

This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on [woonstadrotterdam/woningwaarderingen](https://huggingface.co/datasets/woonstadrotterdam/woningwaarderingen). Inspired by [Ed Donner's price model](https://huggingface.co/ed-donner/pricer-2024-09-13_13.04.39) to predict Amazon product prices.

> [!NOTE]
> How many points for this dwelling?
>
> This is an apartment from 1992 with 5 rooms of which 2 are bedrooms. Its surface area is 64m² and its outdoor area is 4m². The energy label is A. The property value is €223k.
>
> Points: 153

## Model description

The model is trained to predict the _woningwaardering_ points of a dwelling based on a short description of the dwelling.

## Intended uses & limitations

This model is intended for educational and research purposes. However, practical use cases can be imagined. For example, estimates can be made for dwellings based on a short description of the dwelling on a real estate website.

Its main limitation is that it has been trained on a fixed format of dwelling descriptions, and may not generalise to other formats. For its other limitations, see the limitations of the [dataset](https://huggingface.co/datasets/woonstadrotterdam/woningwaarderingen) it was trained on.

## Training and evaluation data

See [woonstadrotterdam/woningwaarderingen](https://huggingface.co/datasets/woonstadrotterdam/woningwaarderingen) for the train, validation and test data.

## Training procedure

See _scripts/training.ipynb_

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 1
- seed: 42
- optimizer: Use OptimizerNames.PAGED_ADAMW with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 7

### Framework versions

- PEFT 0.14.0
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1

## Evaluation

See _scripts/evaluation.ipynb_

MAE and MAPE are chosen as the key metrics for evaluation as they are the most easily interpretable metrics for non-data scientists.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/67c711992ff91f25cbea0dcf/kDgQsyKTFOJyddAk3ZT6r.png)
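A hedged loading sketch, not from the card (which defers to _scripts/training.ipynb_): attaching this LoRA adapter to the 4-bit base model with `peft` and `bitsandbytes`. The prompt format mirrors the card's example; the generation settings are our assumptions.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "meta-llama/Meta-Llama-3.1-8B"
adapter_id = "woonstadrotterdam/woningwaardering-llama3-8b-4bit-v1"

# 4-bit quantised base model, matching the adapter's training setup
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

tokenizer = AutoTokenizer.from_pretrained(base_id)
prompt = (
    "How many points for this dwelling?\n\n"
    "This is an apartment from 1992 with 5 rooms of which 2 are bedrooms. "
    "Its surface area is 64m² and its outdoor area is 4m². "
    "The energy label is A. The property value is €223k.\n\n"
    "Points:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```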
MaestrAI/emma-lora-1746008274
MaestrAI
2025-04-30T10:57:18Z
0
0
null
[ "region:us" ]
null
2025-04-30T10:17:53Z
# emma LORA Model

This is a LORA model for character Emma.

Created at 2025-04-30 12:17:54
ninja75/gemma2b-elon-merged
ninja75
2025-04-30T10:53:33Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T10:48:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tomaarsen/wikipedia-tf-idf-bow
tomaarsen
2025-04-30T10:44:08Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:5749", "loss:CosineSimilarityLoss", "en", "dataset:sentence-transformers/stsb", "arxiv:1908.10084", "model-index", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-04-30T10:44:00Z
--- language: - en tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:5749 - loss:CosineSimilarityLoss widget: - source_sentence: A chef is preparing some food. sentences: - Five birds stand on the snow. - A chef prepared a meal. - There is no 'still' that is not relative to some other object. - source_sentence: A woman is adding oil on fishes. sentences: - Large cruise ship floating on the water. - It refers to the maximum f-stop (which is defined as the ratio of focal length to effective aperture diameter). - The woman is cutting potatoes. - source_sentence: The player shoots the winning points. sentences: - Minimum wage laws hurt the least skilled, least productive the most. - The basketball player is about to score points for his team. - Three televisions, on on the floor, the other two on a box. - source_sentence: Stars form in star-formation regions, which itself develop from molecular clouds. sentences: - Although I believe Searle is mistaken, I don't think you have found the problem. - It may be possible for a solar system like ours to exist outside of a galaxy. - A blond-haired child performing on the trumpet in front of a house while his younger brother watches. - source_sentence: While Queen may refer to both Queen regent (sovereign) or Queen consort, the King has always been the sovereign. sentences: - At first, I thought this is a bit of a tricky question. - A man plays the guitar. - There is a very good reason not to refer to the Queen's spouse as "King" - because they aren't the King. datasets: - sentence-transformers/stsb pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - pearson_cosine - spearman_cosine co2_eq_emissions: emissions: 0.08677984252410158 energy_consumed: 0.00022325545668430209 source: codecarbon training_type: fine-tuning on_cloud: false cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K ram_total_size: 31.777088165283203 hours_used: 0.001 hardware_used: 1 x NVIDIA GeForce RTX 3090 model-index: - name: SentenceTransformer results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts dev type: sts-dev metrics: - type: pearson_cosine value: 0.7290160790683643 name: Pearson Cosine - type: spearman_cosine value: 0.729048355335128 name: Spearman Cosine - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts test type: sts-test metrics: - type: pearson_cosine value: 0.6451566569994759 name: Pearson Cosine - type: spearman_cosine value: 0.6304613140440366 name: Spearman Cosine --- # SentenceTransformer This is a [sentence-transformers](https://www.SBERT.net) model trained on the [stsb](https://huggingface.co/datasets/sentence-transformers/stsb) dataset. It maps sentences & paragraphs to a 512-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) --> - **Maximum Sequence Length:** None tokens - **Output Dimensionality:** 512 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - [stsb](https://huggingface.co/datasets/sentence-transformers/stsb) - **Language:** en <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): BoW() (1): Dense({'in_features': 25000, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) (2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("tomaarsen/wikipedia-tf-idf-bow") # Run inference sentences = [ 'While Queen may refer to both Queen regent (sovereign) or Queen consort, the King has always been the sovereign.', 'There is a very good reason not to refer to the Queen\'s spouse as "King" - because they aren\'t the King.', 'A man plays the guitar.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 512] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Datasets: `sts-dev` and `sts-test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | sts-dev | sts-test | |:--------------------|:----------|:-----------| | pearson_cosine | 0.729 | 0.6452 | | **spearman_cosine** | **0.729** | **0.6305** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### stsb * Dataset: [stsb](https://huggingface.co/datasets/sentence-transformers/stsb) at [ab7a5ac](https://huggingface.co/datasets/sentence-transformers/stsb/tree/ab7a5ac0e35aa22088bdcf23e7fd99b220e53308) * Size: 5,749 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 16 characters</li><li>mean: 31.92 characters</li><li>max: 113 characters</li></ul> | <ul><li>min: 16 characters</li><li>mean: 31.51 characters</li><li>max: 94 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.45</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:-----------------------------------------------------------|:----------------------------------------------------------------------|:------------------| | <code>A plane is taking off.</code> | <code>An air plane is taking off.</code> | <code>1.0</code> | | <code>A man is playing a large flute.</code> | <code>A man is playing a flute.</code> | <code>0.76</code> | | <code>A man is spreading shreded cheese on a pizza.</code> | <code>A man is spreading shredded cheese on an uncooked pizza.</code> | <code>0.76</code> | * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Evaluation Dataset #### stsb * Dataset: [stsb](https://huggingface.co/datasets/sentence-transformers/stsb) at [ab7a5ac](https://huggingface.co/datasets/sentence-transformers/stsb/tree/ab7a5ac0e35aa22088bdcf23e7fd99b220e53308) * Size: 1,500 evaluation samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 12 characters</li><li>mean: 57.37 characters</li><li>max: 144 characters</li></ul> | <ul><li>min: 17 characters</li><li>mean: 56.84 characters</li><li>max: 141 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.42</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:--------------------------------------------------|:------------------------------------------------------|:------------------| | <code>A man with a hard hat is dancing.</code> | <code>A man wearing a hard hat is dancing.</code> | <code>1.0</code> | | <code>A young child is riding a horse.</code> | <code>A child is riding a horse.</code> | <code>0.95</code> | | <code>A man is feeding a mouse to a snake.</code> | <code>The man is feeding a mouse to the snake.</code> | <code>1.0</code> | * Loss: 
[<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `num_train_epochs`: 1 - `warmup_ratio`: 0.1 - `fp16`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `tp_size`: 0 - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - 
`include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | Validation Loss | sts-dev_spearman_cosine | sts-test_spearman_cosine | |:------:|:----:|:-------------:|:---------------:|:-----------------------:|:------------------------:| | 0.5556 | 100 | 0.0747 | 0.0443 | 0.7290 | - | | -1 | -1 | - | - | - | 0.6305 | ### Environmental Impact Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon). - **Energy Consumed**: 0.000 kWh - **Carbon Emitted**: 0.000 kg of CO2 - **Hours Used**: 0.001 hours ### Training Hardware - **On Cloud**: No - **GPU Model**: 1 x NVIDIA GeForce RTX 3090 - **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K - **RAM Size**: 31.78 GB ### Framework Versions - Python: 3.11.6 - Sentence Transformers: 4.2.0.dev0 - Transformers: 4.50.1 - PyTorch: 2.6.0+cu124 - Accelerate: 1.5.1 - Datasets: 2.21.0 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
kiwikiw/mingad2
kiwikiw
2025-04-30T10:43:14Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T10:39:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF
mradermacher
2025-04-30T10:41:03Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "coder", "Math", "RL", "en", "base_model:prithivMLmods/Eratosthenes-Polymath-14B-Instruct", "base_model:quantized:prithivMLmods/Eratosthenes-Polymath-14B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-04-29T01:08:14Z
---
base_model: prithivMLmods/Eratosthenes-Polymath-14B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- coder
- Math
- RL
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->

weighted/imatrix quants of https://huggingface.co/prithivMLmods/Eratosthenes-Polymath-14B-Instruct

<!-- provided-files -->

static quants are available at https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
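Not part of the original card: a hedged sketch of running one of these quants with `llama-cpp-python`. The filename follows the Q4_K_M row in the table above, and `Llama.from_pretrained` is assumed to be available in a recent llama-cpp-python release.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Fetches the GGUF file from the Hub on first use (needs huggingface-hub installed)
llm = Llama.from_pretrained(
    repo_id="mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF",
    filename="Eratosthenes-Polymath-14B-Instruct.i1-Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Prove that there are infinitely many primes."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```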
Dan-AiTuning/calculator_agent_qwen2.5_3b
Dan-AiTuning
2025-04-30T10:40:43Z
5
1
null
[ "safetensors", "qwen2", "agent", "grpo", "mult-turn-rl", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "region:us" ]
null
2025-04-25T21:33:00Z
---
base_model:
- Qwen/Qwen2.5-3B-Instruct
tags:
- agent
- grpo
- mult-turn-rl
---

# Qwen 2.5 3B – Calculator Agent

This is a fine-tuned version of [Qwen 2.5 3B Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) trained to use a calculator tool through multi-turn reinforcement learning with GRPO. A lighter 0.5B model was also trained and can be found [here](https://huggingface.co/Dan-AiTuning/calculator_agent_qwen2.5_0.5b).

**[This Github repo](https://github.com/Danau5tin/calculator_agent_rl) shows in-depth details of the training run process**

---

## 🔧 Model Description

The Qwen 2.5 3B model has been enhanced to interact with a recursive calculator environment that supports four basic arithmetic operations. The agent generates structured tool calls in both XML and YAML format, enabling precise execution of complex expressions. After the calculation is performed by the environment, the model formulates a final human-readable answer.

---

## ✅ Key Achievements

- **Training Method**: GRPO, using a hybrid reward signal combining LLM-as-a-judge feedback (Claude-3.5-Haiku) and programmatic verification.
- **Evaluation Accuracy**:
  - Before RL: **27%**
  - After RL: **89%**
  - **Absolute Gain: +62 pts**
- **Training Cost**: ~$23.50 (~£17.55) on 4x A100 (80GB) GPUs
- **Total Training Time**: ~3 hours

---

## 🧪 Evaluation Dataset

The evaluation dataset consists of synthetically generated arithmetic problems designed to be difficult for humans to solve without a calculator. Questions include nested operations and real-world phrasing diversity.

[Download the eval dataset](https://github.com/Danau5tin/agentic_environments/blob/qwen/examples/calculator_agent/datasets/basic_calculations_eval.csv)

---

## 🛠️ Usage Instructions

### Requirements

- vLLM or Transformers pipeline
- Flash Attention recommended for speed
- For training/RL: see full setup in [GitHub repo](https://github.com/Dan-AiTuning/calculator_agent_rl)

### Example Input:

```text
Find the product of 876 and 543, subtract the quotient of 876 divided by 12, and tell me the result.
```

### Expected Output:

```xml
<calculator>
operation: subtract
operands:
  - operation: multiply
    operands:
      - 876
      - 543
  - operation: divide
    operands:
      - 876
      - 12
</calculator>
```

This output must be passed to the environment to be parsed & calculated. An example in Python is [here](https://github.com/Danau5tin/calculator_agent_rl/tree/main/src/environment/).

The output from the environment should be provided to the model as:

```xml
<output>
{tool output}
</output>
```

Then the model will generate its final response:

```text
The final result of the calculation is 475,595.
```

---

## 📬 License and Attribution

- Base model: [Qwen 2.5 3B Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct)
- Fine-tuned by: Dan Austin
- Repository: [GitHub Project](https://github.com/Dan-AiTuning/calculator_agent_rl)

## 🧠 Training Framework Acknowledgement

This model was trained using parts of the [Verifiers](https://github.com/willccbb/verifiers) framework for structured reinforcement learning. If you use this model or build upon this work, please consider citing:

```
@article{brown2025verifiers,
  title={Verifiers: Reinforcement Learning with LLMs in Verifiable Environments},
  author={Brown, William},
  year={2025}
}
```
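Purely illustrative (the repo's own parser lives at the environment link above): a minimal sketch of how a `<calculator>` call in this format could be parsed with PyYAML and evaluated recursively. The helper names and operator set are our assumptions, not the project's actual API.

```python
import math
import re
import yaml  # pip install pyyaml

# Assumed operator set matching the card's "four basic arithmetic operations"
OPS = {
    "add": sum,
    "subtract": lambda xs: xs[0] - sum(xs[1:]),
    "multiply": math.prod,
    "divide": lambda xs: xs[0] / xs[1],
}

def evaluate(node):
    """Evaluate a node: either a bare number or an {operation, operands} mapping."""
    if isinstance(node, (int, float)):
        return node
    return OPS[node["operation"]]([evaluate(child) for child in node["operands"]])

def run_calculator(completion: str) -> float:
    """Pull the YAML body out of the <calculator> tag and evaluate it."""
    body = re.search(r"<calculator>(.*?)</calculator>", completion, re.DOTALL).group(1)
    return evaluate(yaml.safe_load(body))

# For the card's example call this returns 475595.0 (876 * 543 - 876 / 12)
```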
TOMFORD79/Hano
TOMFORD79
2025-04-30T10:37:03Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-04-30T10:09:51Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
kiwikiw/mingad
kiwikiw
2025-04-30T10:35:12Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T10:30:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
maksf8486/bb8ee146-b69a-485e-beb7-392d4059d150
maksf8486
2025-04-30T10:33:33Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:NousResearch/Nous-Hermes-llama-2-7b", "base_model:adapter:NousResearch/Nous-Hermes-llama-2-7b", "license:mit", "8-bit", "bitsandbytes", "region:us" ]
null
2025-04-30T09:59:52Z
--- library_name: peft license: mit base_model: NousResearch/Nous-Hermes-llama-2-7b tags: - axolotl - generated_from_trainer model-index: - name: bb8ee146-b69a-485e-beb7-392d4059d150 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: NousResearch/Nous-Hermes-llama-2-7b bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 5cfb94c383f95340_train_data.json ds_type: json format: custom path: /workspace/input_data/5cfb94c383f95340_train_data.json type: field_instruction: instruction field_output: chosen_response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null dpo: beta: 0.1 enabled: true group_by_length: false rank_loss: false reference_model: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: maksf8486/bb8ee146-b69a-485e-beb7-392d4059d150 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: false load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/5cfb94c383f95340_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 10dc235b-06a9-410c-a72b-3ec423544136 wandb_project: s56-2 wandb_run: your_name wandb_runid: 10dc235b-06a9-410c-a72b-3ec423544136 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # bb8ee146-b69a-485e-beb7-392d4059d150 This model is a fine-tuned version of [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.0103 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.9359 | 0.0244 | 200 | 1.0103 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
LarryAIDraw/Jinhsi_Khan-03
LarryAIDraw
2025-04-30T10:23:11Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-04-30T09:13:07Z
--- license: creativeml-openrail-m --- https://civitai.com/models/944920/jinhsi-wuthering-waves-3-outfits
kallilikhitha123/llama-Quantized-Model-8b-473_1_30-04-2025_1step
kallilikhitha123
2025-04-30T10:22:43Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-04-30T09:38:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ybq0509/des_Q_7B_ckpt1106
ybq0509
2025-04-30T10:22:33Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T10:15:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jaruiz/q-FrozenLake-v1-4x4-noSlippery
jaruiz
2025-04-30T10:20:54Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-04-30T10:20:51Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="jaruiz/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
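For a self-contained version of the snippet above, here is a minimal sketch under two assumptions: `load_from_hub` is the pickle-loading helper from the Deep RL course notebook (reproduced here, not a library import), and the pickled dict exposes the Q-table under a `qtable` key, as in the course format.

```python
import pickle
import numpy as np
import gymnasium as gym
from huggingface_hub import hf_hub_download

# Course-style helper: download and unpickle the model dict from the Hub.
def load_from_hub(repo_id: str, filename: str) -> dict:
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="jaruiz/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)

# Play one greedy episode; the "qtable" key is an assumption (Deep RL course format).
state, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```

On the 4x4 non-slippery map, a greedy rollout should reach the goal every time, consistent with the reported mean reward of 1.00 +/- 0.00.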
PrMoriarty/ppo-LunarLander-v2
PrMoriarty
2025-04-30T10:16:40Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-04-29T17:39:15Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 250.81 +/- 17.22 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file listing): ```python from stable_baselines3 import PPO from huggingface_sb3 import load_from_hub checkpoint = load_from_hub(repo_id="PrMoriarty/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
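Continuing the sketch above, the agent can be sanity-checked with Stable-Baselines3's `evaluate_policy`; building the environment with Gymnasium is an assumption that holds for SB3 2.x.

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

# Evaluate the loaded agent over 10 deterministic episodes.
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```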
SimpleStories/SimpleStories-11M
SimpleStories
2025-04-30T10:14:20Z
5
0
null
[ "safetensors", "llama", "small-language-model", "story-generation", "text-generation", "efficient-nlp", "distilled-models", "en", "dataset:lennart-finke/SimpleStories", "arxiv:2504.09184", "license:mit", "region:us" ]
text-generation
2025-04-22T14:16:43Z
--- license: mit datasets: - lennart-finke/SimpleStories language: - en tags: - small-language-model - story-generation - text-generation - efficient-nlp - distilled-models --- # SimpleStories Model Family The SimpleStories models are a tiny model family created for interpretability research, trained on the [SimpleStories dataset](https://huggingface.co/datasets/lennart-finke/SimpleStories). ## Usage ```python import torch from transformers import AutoTokenizer, LlamaForCausalLM MODEL_SIZE = "11M" model_path = "SimpleStories/SimpleStories-{}".format(MODEL_SIZE) tokenizer = AutoTokenizer.from_pretrained(model_path) model = LlamaForCausalLM.from_pretrained(model_path) model.to("cuda") model.eval() prompt = "The curious cat looked at the" inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False) input_ids = inputs.input_ids.to("cuda") eos_token_id = 1 with torch.no_grad(): output_ids = model.generate( input_ids=input_ids, max_new_tokens=400, temperature=0.7, do_sample=True, eos_token_id=eos_token_id ) output_text = tokenizer.decode(output_ids[0], skip_special_tokens=True) print(f"\nGenerated text:\n{output_text}") ``` ## Model Variants | Model Name | n_params | n_layers | d_model | n_heads | n_ctx | d_vocab | |------------|----------|----------|---------|---------|-------|---------| | SimpleStories-35M | 35 million | 12 | 512 | 8 | 512 | 4096 | | SimpleStories-30M | 30 million | 10 | 512 | 8 | 512 | 4096 | | SimpleStories-11M | 11 million | 6 | 384 | 6 | 512 | 4096 | | SimpleStories-5M | 5 million | 6 | 256 | 4 | 512 | 4096 | | SimpleStories-1.25M | 1.25 million | 4 | 128 | 4 | 512 | 4096 | ## Performance Comparison Model-evaluated generation quality metrics: <p align="center"> <img width="80%" src="figures/simplestories_comparison.png"> </p> ## Tokenizer We use a custom WordPiece tokenizer with a small vocabulary size of 4096. We conducted morphological analysis and coverage gain analysis on the dataset to build a small tokenizer without compromising on the quality of generation. ## Dataset The SimpleStories dataset is a collection of short stories generated by state-of-the-art language models. It features: - Story annotation with high-level concepts: theme, topic, style, etc. - Higher semantic and syntactic diversity through seeded story generation - Generated by 2024 models - Several NLP-metrics pre-computed to aid filtering - ASCII-only guarantee for the English dataset Read the dataset paper on [arXiv](https://arxiv.org/abs/2504.09184). ## Training The training and evaluation scripts can be accessed at https://github.com/danbraunai/simple_stories_train
AXERA-TECH/Qwen3-1.7B
AXERA-TECH
2025-04-30T10:14:04Z
0
0
null
[ "Qwen", "Qwen3", "Int8", "text-generation", "en", "base_model:Qwen/Qwen3-1.7B", "base_model:finetune:Qwen/Qwen3-1.7B", "license:apache-2.0", "region:us" ]
text-generation
2025-04-30T09:05:24Z
--- license: apache-2.0 language: - en base_model: - Qwen/Qwen3-1.7B pipeline_tag: text-generation tags: - Qwen - Qwen3 - Int8 --- # Qwen3-1.7B-Int8 This version of Qwen3-1.7B-Int8 has been converted to run on the Axera NPU using **w8a16** quantization. Compatible with Pulsar2 version: 4.0-temp (not yet released) ## Conversion tool links If you are interested in model conversion, you can export the axmodel from the original repo: https://huggingface.co/Qwen/Qwen3-1.7B [Pulsar2 Link, How to Convert LLM from Huggingface to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/appendix/build_llm.html) [AXera NPU LLM Runtime](https://github.com/AXERA-TECH/ax-llm) ## Support Platform - AX650 - [M4N-Dock(爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html) - [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html) |Chips|w8a16|w4a16| |--|--|--| |AX650| 9.5 tokens/sec|TBD| ## How to use Download all files from this repository to the device. ``` root@ax650:/mnt/qtang/llm-test/qwen3-1.7b# tree -L 1 . |-- config.json |-- main_ax650 |-- main_axcl_aarch64 |-- main_axcl_x86 |-- post_config.json |-- qwen2.5_tokenizer |-- qwen3-1.7b-ax650 |-- qwen3_tokenizer |-- qwen3_tokenizer_uid.py |-- run_qwen3_1.7b_int8_ctx_ax650.sh |-- run_qwen3_1.7b_int8_ctx_axcl_aarch64.sh `-- run_qwen3_1.7b_int8_ctx_axcl_x86.sh 3 directories, 9 files root@ax650:/mnt/qtang/llm-test/qwen3-1.7b# ``` #### Start the Tokenizer service Install the requirements: ``` pip install transformers jinja2 ``` ``` root@ax650:/mnt/qtang/llm-test/qwen3-1.7b# python3 qwen3_tokenizer_uid.py None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
Server running at http://0.0.0.0:12345 ``` #### Inference with AX650 Host, such as M4N-Dock(爱芯派Pro) or AX650N DEMO Board Open another terminal and run `run_qwen3_1.7b_int8_ctx_ax650.sh` ``` root@ax650:/mnt/qtang/llm-test/qwen3-1.7b# ./run_qwen3_1.7b_int8_ctx_ax650.sh [I][ Init][ 110]: LLM init start [I][ Init][ 34]: connect http://127.0.0.1:12345 ok [I][ Init][ 57]: uid: 7a057c11-c513-485f-84a1-1d28dcbeb89d bos_id: -1, eos_id: 151645 3% | ██ | 1 / 31 [3.97s<123.16s, 0.25 count/s] tokenizer init ok [I][ Init][ 26]: LLaMaEmbedSelector use mmap 100% | ████████████████████████████████ | 31 / 31 [23.76s<23.76s, 1.30 count/s] init post axmodel ok,remain_cmm(8740 MB) [I][ Init][ 188]: max_token_len : 2559 [I][ Init][ 193]: kv_cache_size : 1024, kv_cache_num: 2559 [I][ Init][ 201]: prefill_token_num : 128 [I][ Init][ 205]: grp: 1, prefill_max_token_num : 1 [I][ Init][ 205]: grp: 2, prefill_max_token_num : 512 [I][ Init][ 205]: grp: 3, prefill_max_token_num : 1024 [I][ Init][ 205]: grp: 4, prefill_max_token_num : 1536 [I][ Init][ 205]: grp: 5, prefill_max_token_num : 2048 [I][ Init][ 209]: prefill_max_token_num : 2048 [I][ load_config][ 282]: load config: { "enable_repetition_penalty": false, "enable_temperature": false, "enable_top_k_sampling": true, "enable_top_p_sampling": false, "penalty_window": 20, "repetition_penalty": 1.2, "temperature": 0.9, "top_k": 1, "top_p": 0.8 } [I][ Init][ 218]: LLM init ok Type "q" to exit, Ctrl+c to stop current running [I][ GenerateKVCachePrefill][ 270]: input token num : 21, prefill_split_num : 1 prefill_grpid : 2 [I][ GenerateKVCachePrefill][ 307]: input_num_token:21 [I][ main][ 230]: precompute_len: 21 [I][ main][ 231]: system_prompt: You are Qwen, created by Alibaba Cloud. You are a helpful assistant. prompt >> 1+1=? [I][ SetKVCache][ 530]: prefill_grpid:2 kv_cache_num:512 precompute_len:21 input_num_token:16 [I][ SetKVCache][ 533]: current prefill_max_token_num:1920 [I][ Run][ 659]: input token num : 16, prefill_split_num : 1 [I][ Run][ 685]: input_num_token:16 [I][ Run][ 808]: ttft: 678.72 ms <think> </think> 1 + 1 = 2. [N][ Run][ 922]: hit eos,avg 9.16 token/s [I][ GetKVCache][ 499]: precompute_len:49, remaining:1999 prompt >> who are you? [I][ SetKVCache][ 530]: prefill_grpid:2 kv_cache_num:512 precompute_len:49 input_num_token:16 [I][ SetKVCache][ 533]: current prefill_max_token_num:1920 [I][ Run][ 659]: input token num : 16, prefill_split_num : 1 [I][ Run][ 685]: input_num_token:16 [I][ Run][ 808]: ttft: 677.87 ms <think> </think> I am Qwen, a large language model developed by Alibaba Cloud. I can answer questions, help with tasks, and provide information on various topics. I am designed to be helpful and useful to users. [N][ Run][ 922]: hit eos,avg 9.13 token/s [I][ GetKVCache][ 499]: precompute_len:110, remaining:1938 prompt >> q ``` #### Inference with M.2 Accelerator card [What is M.2 Accelerator card?](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html), Show this DEMO based on Raspberry PI 5. 
``` (base) axera@raspberrypi:~/samples/qwen3-1.7b $ ./run_qwen3_1.7b_int8_ctx_axcl_aarch64.sh [I][ Init][ 136]: LLM init start [I][ Init][ 34]: connect http://127.0.0.1:12345 ok [I][ Init][ 57]: uid: ea509ef6-ab6c-49b0-9dcf-931db2ce1bf7 bos_id: -1, eos_id: 151645 3% | ██ | 1 / 31 [0.98s<30.47s, 1.02 count/s] tokenizer init ok [I][ Init][ 45]: LLaMaEmbedSelector use mmap 6% | ███ | 2 / 31 [0.98s<15.24s, 2.03 count/s] embed_selector init ok [I][ run][ 30]: AXCLWorker start with devid 0 100% | ████████████████████████████████ | 31 / 31 [49.40s<49.40s, 0.63 count/s] init post axmodel ok,remain_cmm(3788 MB) [I][ Init][ 237]: max_token_len : 2559 [I][ Init][ 240]: kv_cache_size : 1024, kv_cache_num: 2559 [I][ Init][ 248]: prefill_token_num : 128 [I][ Init][ 252]: grp: 1, prefill_max_token_num : 1 [I][ Init][ 252]: grp: 2, prefill_max_token_num : 512 [I][ Init][ 252]: grp: 3, prefill_max_token_num : 1024 [I][ Init][ 252]: grp: 4, prefill_max_token_num : 1536 [I][ Init][ 252]: grp: 5, prefill_max_token_num : 2048 [I][ Init][ 256]: prefill_max_token_num : 2048 ________________________ | ID| remain cmm(MB)| ======================== | 0| 3788| ¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯ [I][ load_config][ 282]: load config: { "enable_repetition_penalty": false, "enable_temperature": false, "enable_top_k_sampling": true, "enable_top_p_sampling": false, "penalty_window": 20, "repetition_penalty": 1.2, "temperature": 0.9, "top_k": 1, "top_p": 0.8 } [I][ Init][ 279]: LLM init ok Type "q" to exit, Ctrl+c to stop current running [I][ GenerateKVCachePrefill][ 335]: input token num : 21, prefill_split_num : 1 prefill_grpid : 2 [I][ GenerateKVCachePrefill][ 372]: input_num_token:21 [I][ main][ 236]: precompute_len: 21 [I][ main][ 237]: system_prompt: You are Qwen, created by Alibaba Cloud. You are a helpful assistant. prompt >> 1+2=? [I][ SetKVCache][ 628]: prefill_grpid:2 kv_cache_num:512 precompute_len:21 input_num_token:16 [I][ SetKVCache][ 631]: current prefill_max_token_num:1920 [I][ Run][ 869]: input token num : 16, prefill_split_num : 1 [I][ Run][ 901]: input_num_token:16 [I][ Run][1030]: ttft: 796.97 ms <think> </think> 1 + 2 = 3. [N][ Run][1182]: hit eos,avg 7.43 token/s [I][ GetKVCache][ 597]: precompute_len:49, remaining:1999 prompt >> who are you? [I][ SetKVCache][ 628]: prefill_grpid:2 kv_cache_num:512 precompute_len:49 input_num_token:16 [I][ SetKVCache][ 631]: current prefill_max_token_num:1920 [I][ Run][ 869]: input token num : 16, prefill_split_num : 1 [I][ Run][ 901]: input_num_token:16 [I][ Run][1030]: ttft: 800.01 ms <think> </think> I am Qwen, a large language model developed by Alibaba Cloud. I can help with various tasks, such as answering questions, writing text, providing explanations, and more. If you have any questions or need assistance, feel free to ask! 
[N][ Run][1182]: hit eos,avg 7.42 token/s [I][ GetKVCache][ 597]: precompute_len:118, remaining:1930 prompt >> q [I][ run][ 80]: AXCLWorker exit with devid 0 (base) axera@raspberrypi:~/samples/qwen3-1.7b $ (base) axera@raspberrypi:~ $ axcl-smi +------------------------------------------------------------------------------------------------+ | AXCL-SMI V3.4.0_20250423020139 Driver V3.4.0_20250423020139 | +-----------------------------------------+--------------+---------------------------------------+ | Card Name Firmware | Bus-Id | Memory-Usage | | Fan Temp Pwr:Usage/Cap | CPU NPU | CMM-Usage | |=========================================+==============+=======================================| | 0 AX650N V3.4.0 | 0000:01:00.0 | 183 MiB / 945 MiB | | -- 38C -- / -- | 0% 0% | 3251 MiB / 7040 MiB | +-----------------------------------------+--------------+---------------------------------------+ +------------------------------------------------------------------------------------------------+ | Processes: | | Card PID Process Name NPU Memory Usage | |================================================================================================| | 0 71266 /home/axera/samples/qwen3-1.7b/main_axcl_aarch64 2193524 KiB | +------------------------------------------------------------------------------------------------+ (base) axera@raspberrypi:~ $ ```
Qwe1325/gemma2-2b-it-tw-Q4_K_M-GGUF
Qwe1325
2025-04-30T10:13:33Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "zh", "dataset:yentinglin/TaiwanChat", "base_model:jslin09/gemma2-2b-it-tw", "base_model:quantized:jslin09/gemma2-2b-it-tw", "license:gemma", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-04-30T10:13:24Z
--- base_model: jslin09/gemma2-2b-it-tw datasets: - yentinglin/TaiwanChat language: - zh library_name: transformers license: gemma pipeline_tag: text-generation tags: - llama-cpp - gguf-my-repo --- # Qwe1325/gemma2-2b-it-tw-Q4_K_M-GGUF This model was converted to GGUF format from [`jslin09/gemma2-2b-it-tw`](https://huggingface.co/jslin09/gemma2-2b-it-tw) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/jslin09/gemma2-2b-it-tw) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Qwe1325/gemma2-2b-it-tw-Q4_K_M-GGUF --hf-file gemma2-2b-it-tw-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Qwe1325/gemma2-2b-it-tw-Q4_K_M-GGUF --hf-file gemma2-2b-it-tw-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Qwe1325/gemma2-2b-it-tw-Q4_K_M-GGUF --hf-file gemma2-2b-it-tw-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Qwe1325/gemma2-2b-it-tw-Q4_K_M-GGUF --hf-file gemma2-2b-it-tw-q4_k_m.gguf -c 2048 ```
kjsbrian/mango-recall-classifier
kjsbrian
2025-04-30T10:10:47Z
57
0
null
[ "safetensors", "electra", "text-classification", "license:mit", "region:us" ]
text-classification
2025-04-26T02:42:48Z
--- license: mit pipeline_tag: text-classification ---
18-Jobz-Hunting-Sajal-Malik-Viral-Video-Xn/Full.Clip.Jobz.Hunting.Sajal.Malik.Viral.Video.Original.Link
18-Jobz-Hunting-Sajal-Malik-Viral-Video-Xn
2025-04-30T10:05:50Z
0
0
null
[ "region:us" ]
null
2025-04-30T10:04:52Z
Sajal Malik's viral video is trending across social media, sparking widespread interest. This post covers what's actually happening, separating fact from speculation. We look at how the video gained traction, the public reaction, and why it is making headlines. This article strictly follows Blogger and AdSense guidelines, offering an educational and respectful analysis. Learn what is true, what is exaggerated, and why it matters in the age of viral content. Stay informed and avoid misinformation by reading the full story behind the trending Sajal Malik viral video.
convaiinnovations/hindi_llm_moe
convaiinnovations
2025-04-30T10:05:33Z
0
0
null
[ "safetensors", "region:us" ]
null
2025-04-30T09:56:27Z
# Hindi Embedding Foundational Model This is a multilingual causal language model with a focus on Hindi text generation. The model uses a custom architecture with several advanced features: - Mixture of Experts (MoE) for more efficient and scalable parameter usage - Rotary Position Embeddings (RoPE) for improved handling of positional information - Grouped Query Attention (GQA) for efficient attention computation - Language embeddings for multilingual support - Initial CNN layer for improved token representation ## Model Details - **Type:** Causal Language Model (auto-regressive) - **Framework:** PyTorch (custom architecture) - **Language Support:** Primary focus on Hindi - **License:** Apache 2.0 - **Developed by:** ConvaiInnovations ## Usage This model requires custom architecture files for inference. You need to include the following Python modules in your project: - `convaicausallm_model_with_moe_rope.py`: Contains the model architecture - `hindi_embeddings.py`: Contains the SentencePiece tokenizer wrapper ### Sample Code ```python import torch from convaicausallm_model_with_moe_rope import ConvaiCausalLMConfig, ConvaiCausalLM from hindi_embeddings import SentencePieceTokenizerWrapper from safetensors.torch import load_file import json # Load model and tokenizer tokenizer = SentencePieceTokenizerWrapper("tokenizer.model") config_path = "config.json" with open(config_path, "r") as f: config_dict = json.load(f) config = ConvaiCausalLMConfig(**config_dict) model = ConvaiCausalLM(config) state_dict = load_file("model.safetensors") model.load_state_dict(state_dict) # Generate text input_text = "भारत की राजधानी क्या है?" input_ids = tokenizer.sp_model.EncodeAsIds(input_text) input_ids_tensor = torch.tensor([input_ids], dtype=torch.long) lang_id = torch.tensor([0], dtype=torch.long) # Language ID for Hindi # Forward pass outputs = model(input_ids=input_ids_tensor, lang_ids=lang_id, char_ids=None) next_token_logits = outputs["logits"][:, -1, :] next_token = torch.argmax(next_token_logits, dim=-1).unsqueeze(-1) # Continue generation as needed... ``` See `generate_multilingual.py` for a complete text generation implementation. ## Limitations This is an early version of the model with the following limitations: - Limited contextual knowledge - May generate inaccurate or nonsensical information - Performance varies depending on input prompt and generation parameters ## Acknowledgments This work builds upon advancements in language model architecture and training techniques from the research community.
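The usage sample above stops after a single next-token prediction; a minimal greedy decoding loop, reusing the same objects, might look like the sketch below. The 64-token budget and the SentencePiece `DecodeIds` call are assumptions.

```python
# Hedged sketch: greedy autoregressive decoding, continuing the sample above.
max_new_tokens = 64  # assumption: adjust to taste
generated = input_ids_tensor
with torch.no_grad():
    for _ in range(max_new_tokens):
        outputs = model(input_ids=generated, lang_ids=lang_id, char_ids=None)
        # Pick the highest-probability token and append it to the sequence.
        next_token = torch.argmax(outputs["logits"][:, -1, :], dim=-1, keepdim=True)
        generated = torch.cat([generated, next_token], dim=-1)

print(tokenizer.sp_model.DecodeIds(generated[0].tolist()))
```

See `generate_multilingual.py` in the repository for the full implementation with sampling and stopping criteria.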
WTNLXTBL/Qwen3-4B-Base-Q4_K_M-GGUF
WTNLXTBL
2025-04-30T10:01:08Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:Qwen/Qwen3-4B-Base", "base_model:quantized:Qwen/Qwen3-4B-Base", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-30T10:00:55Z
--- base_model: Qwen/Qwen3-4B-Base library_name: transformers license: apache-2.0 tags: - llama-cpp - gguf-my-repo --- # WTNLXTBL/Qwen3-4B-Base-Q4_K_M-GGUF This model was converted to GGUF format from [`Qwen/Qwen3-4B-Base`](https://huggingface.co/Qwen/Qwen3-4B-Base) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-4B-Base) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo WTNLXTBL/Qwen3-4B-Base-Q4_K_M-GGUF --hf-file qwen3-4b-base-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo WTNLXTBL/Qwen3-4B-Base-Q4_K_M-GGUF --hf-file qwen3-4b-base-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo WTNLXTBL/Qwen3-4B-Base-Q4_K_M-GGUF --hf-file qwen3-4b-base-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo WTNLXTBL/Qwen3-4B-Base-Q4_K_M-GGUF --hf-file qwen3-4b-base-q4_k_m.gguf -c 2048 ```
prithivMLmods/Qwen3-4B-ft-bf16-Q8_0-GGUF
prithivMLmods
2025-04-30T10:00:56Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "moe", "moderately abliterated variant", "llama-cpp", "gguf-my-repo", "Qwen3", "text-generation", "en", "base_model:prithivMLmods/Qwen3-4B-ft-bf16", "base_model:quantized:prithivMLmods/Qwen3-4B-ft-bf16", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-04-30T09:56:50Z
--- base_model: prithivMLmods/Qwen3-4B-ft-bf16 language: - en library_name: transformers license: apache-2.0 pipeline_tag: text-generation tags: - text-generation-inference - moe - moderately abliterated variant - llama-cpp - gguf-my-repo - Qwen3 --- # prithivMLmods/Qwen3-4B-ft-bf16-Q8_0-GGUF This model was converted to GGUF format from [`prithivMLmods/Qwen3-4B-ft-bf16`](https://huggingface.co/prithivMLmods/Qwen3-4B-ft-bf16) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/prithivMLmods/Qwen3-4B-ft-bf16) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo prithivMLmods/Qwen3-4B-ft-bf16-Q8_0-GGUF --hf-file qwen3-4b-ft-bf16-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo prithivMLmods/Qwen3-4B-ft-bf16-Q8_0-GGUF --hf-file qwen3-4b-ft-bf16-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo prithivMLmods/Qwen3-4B-ft-bf16-Q8_0-GGUF --hf-file qwen3-4b-ft-bf16-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo prithivMLmods/Qwen3-4B-ft-bf16-Q8_0-GGUF --hf-file qwen3-4b-ft-bf16-q8_0.gguf -c 2048 ```
gushanjishui/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-smooth_snappy_hedgehog
gushanjishui
2025-04-30T10:00:32Z
15
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am smooth snappy hedgehog", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-13T13:32:53Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-smooth_snappy_hedgehog tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am smooth snappy hedgehog - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-smooth_snappy_hedgehog This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="gushanjishui/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-smooth_snappy_hedgehog", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
skywalker290/Meta-Llama-3.1-8B-Instruct
skywalker290
2025-04-30T06:26:52Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T06:13:58Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
abharadwaj123/skywork-2b-fine-tuned-length-1000-3
abharadwaj123
2025-04-30T06:26:41Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T06:26:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hcharm/gemma-medical-qa-finetune_adjust
hcharm
2025-04-30T06:25:44Z
0
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T06:19:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
siddhant71197/female_lean_bald_v2
siddhant71197
2025-04-30T06:21:56Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-04-30T05:42:22Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: Sidf --- # Female_Lean_Bald_V2 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `Sidf` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "Sidf", "lora_weights": "https://huggingface.co/siddhant71197/female_lean_bald_v2/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('siddhant71197/female_lean_bald_v2', weight_name='lora.safetensors') image = pipeline('Sidf').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/siddhant71197/female_lean_bald_v2/discussions) to add images that show off what you’ve made with this LoRA.
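A note on the diffusers snippet above: it generates the image but never writes it to disk. A one-line continuation (the output filename is arbitrary) completes the example:

```py
image.save("sidf_output.png")
```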
jinx2321/base-tagged-1e4-paper
jinx2321
2025-04-30T06:20:52Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:everdoubling/byt5-Korean-base", "base_model:finetune:everdoubling/byt5-Korean-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-04-29T07:21:14Z
--- library_name: transformers license: apache-2.0 base_model: everdoubling/byt5-Korean-base tags: - generated_from_trainer model-index: - name: base-tagged-1e4-paper results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # base-tagged-1e4-paper This model is a fine-tuned version of [everdoubling/byt5-Korean-base](https://huggingface.co/everdoubling/byt5-Korean-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.52.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.1
OpenDFM/ChemDFM-X-v1.0-13B
OpenDFM
2025-04-30T06:20:46Z
13
3
null
[ "safetensors", "llama", "license:agpl-3.0", "region:us" ]
null
2025-01-20T13:41:36Z
--- license: agpl-3.0 --- # ChemDFM-X: Towards Large Multimodal Model for Chemistry ## Index - [Introduction](#introduction) - [Getting Started](#getting-started) - [Usage](#usage) - [Example](#example) - [Citation](#citation) - [Disclaimer](#disclaimer) - [Contact](#contact) ## Introduction ChemDFM-X is a multimodal model for chemistry, supporting five input modalities: molecule graph (2D), molecule conformer (3D), molecule image, mass spectra (MS), and infrared spectrum (IR). Each modality is encoded by a dedicated encoder: [MoleBERT](https://github.com/junxia97/Mole-BERT), [Uni-Mol](https://github.com/deepmodeling/Uni-Mol/tree/main/unimol), [CLIP](https://github.com/openai/CLIP), and transformer encoders we trained ourselves. [Paper](https://www.sciengine.com/SCIS/doi/10.1007/s11432-024-4243-0) &nbsp; [GitHub](https://github.com/OpenDFM/ChemDFM-X) &nbsp; [HuggingFace](https://huggingface.co/OpenDFM/ChemDFM-X-v1.0-13B) &nbsp; [ModelScope](https://modelscope.cn/models/OpenDFM/ChemDFM-X-v1.0-13B) ## Getting Started 1. Download the ChemDFM-X model parameters from [HuggingFace](https://huggingface.co/OpenDFM/ChemDFM-X-v1.0-13B) or [ModelScope](https://modelscope.cn/models/OpenDFM/ChemDFM-X-v1.0-13B). 2. Download the demo code from the ChemDFM-X [GitHub](https://github.com/OpenDFM/ChemDFM-X) repository. *NOTE: Since ChemDFM-X is an MLLM for chemical modalities, its architecture is not a standard LLM or VLM; it requires a specific model definition and input preprocessing.* 3. Install the required packages. The preferred environment is listed in requirements.txt. We strongly suggest installing PyTorch, PyTorch Geometric, FlashAttention and Uni-Mol under Python 3.10 before the other requirements. *NOTE: The CUDA and GLIBC versions on your machine may not support specific package versions, which is why we suggest installing these packages first.* 4. Edit the package versions in requirements.txt to match your own environment, then run `pip install -r requirements.txt`. ## Usage 1. Run the bash command to launch the command-line interactive demo. Please ensure your environment is activated. ```bash ./infer/scripts/interact.sh``` 2. Give an instruction. 3. Give input text mixed with modality tokens (one token for each file). 4. Give the real file path for each modality token, one by one. *NOTE: for batch inference, see the files [./example/C=COF.jsonl](https://github.com/OpenDFM/ChemDFM-X/blob/main/example/C%3DCOF.jsonl) and [./infer/infer_mm_raw.py#L414](https://github.com/OpenDFM/ChemDFM-X/blob/main/infer/infer_mm_raw.py#L414) for details.* The special tokens for each modality are listed below: | modality | modality token | file format | | :--- | :--- | :--- | | molecule **G**raph | [MM_FILE_G] | mol.sdf | | molecule **C**onformer | [MM_FILE_C] | mol.xyz | | molecule **I**mage | [MM_FILE_I] | mol.png | | **M**ass spectra | [MM_FILE_M] | mol.mgf | | inf**R**ared spectrum | [MM_FILE_R] | mol.csv | NOTE: We use standard file formats to represent the modality data. Some of these formats also embed a SMILES string; we do not use it, so it is fine to put a dummy SMILES in the file. ## Example More examples will be updated later. | instruction | input | mm_input_files | | :--- | :--- | :--- | | Would you please predict the SMILES notation that corresponds to the molecular figure? | **[MM_FILE_I]** | ./example/C=COF.png | | | | | | Would you please predict the SMILES notation that corresponds to the molecular tandem mass spectrometry? 
| **[MM_FILE_M]** | ./example/ms.mgf | | | | | | As a seasoned chemist, you have the SMILES notation with molecular graph of the identified reactants, reagents and products from an incomplete chemical reaction. It appears that some component or components in the products are missing. Using the information presented in the remaining parts of the reaction equation, could you make an educated guess about what these missing substances could be? Please confine your answer to the SMILES of the unknown molecule(s) and avoid incorporating any superfluous information. | SMILES of Reactants: CC(C)[Mg]Cl.CSc1c(F)cc(F)cc1Br.COB(OC)OC \n molecular graph of Reactants **[MM_FILE_G] [MM_FILE_G] [MM_FILE_G]**\nSMILES of Reagents: C1CCOC1\nmolecular graph of Reagents: **[MM_FILE_G]**\nSMILES of Products:\nmolecular graph of Products:\nSMILES of the absent products:\nAssistant:|CC(C)[Mg]Cl.sdf CSc1c(F)cc(F)cc1Br.sdf COB(OC)OC.sdf C1CCOC1.sdf | As an accomplished chemist, it's important to use your expertise in anticipating the chemical attributes to predict molecular features. When scrutinizing the molecular conformation of a chemical compound for the estimation of its molecular properties, make sure to retain the original format without infusing any additional data. Judge if the compound's composition has the potential to inhibit (Yes) or not inhibit (No) the Beta-site Amyloid Precursor Protein Cleaving Enzyme 1 (BACE1). Consider elements like molecular weight, number of atoms, types of bonds, and functional groups while examining the compound's potentiality as a viable drug and its probable effectiveness in curing Alzheimer's disease. Give a clear Yes or No answer. | molecular conformation: **[MM_FILE_C]** | ./example/C=COF.xyz | ## Citation If you use ChemDFM-X in your research or applications, please cite our work: ```bibtex @article{zhao2024chemdfmx, title={ChemDFM-X: towards large multimodal model for chemistry}, author={Zhao, Zihan and Chen, Bo and Li, Jingpiao and Chen, Lu and Wen, Liyang and Wang, Pengyu and Zhu, Zichen and Zhang, Danyang and Li, Yansi and Dai, Zhongyang and Chen, Xin and Yu, Kai}, journal={Science China Information Sciences}, volume={67}, number={12}, pages={220109}, year={2024}, doi={10.1007/s11432-024-4243-0} } ``` ## Disclaimer The current version of ChemDFM-X may generate incorrect or misleading information. Please use it with caution and verify the results with domain experts before making any decisions based on the results. ## Contact If you have any questions or further requests, please contact [Zihan Zhao](mailto:[email protected]), [Bo Chen](mailto:[email protected]) and [Lu Chen](mailto:[email protected]).
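As a pointer for the batch-inference path mentioned in the Usage section, a record in ./example/C=COF.jsonl plausibly mirrors the three columns of the example table (instruction, input, mm_input_files). The field names below are an assumption, so treat this as a sketch and consult the repository file for the authoritative schema:

```json
{"instruction": "Would you please predict the SMILES notation that corresponds to the molecular figure?", "input": "[MM_FILE_I]", "mm_input_files": ["./example/C=COF.png"]}
```

Each modality token in the input is presumably matched positionally against an entry in mm_input_files, following the one-token-per-file convention described above.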
dandelion4/stella-Qwen3-14B
dandelion4
2025-04-30T06:20:08Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "base_model:unsloth/Qwen3-14B", "base_model:finetune:unsloth/Qwen3-14B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-30T06:19:42Z
--- base_model: unsloth/Qwen3-14B tags: - text-generation-inference - transformers - unsloth - qwen3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** dandelion4 - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen3-14B This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
yhs0831/gemma-medical-qa-finetune
yhs0831
2025-04-30T06:18:27Z
0
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T04:52:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
FluffBaal/llama381binstruct_summarize_short_merged
FluffBaal
2025-04-30T06:14:47Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-04-30T06:11:29Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TianTianSuper/TableMaster-fork
TianTianSuper
2025-04-30T06:13:23Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-30T06:13:23Z
--- license: apache-2.0 ---
dandelion4/stella-Qwen2.5-3B
dandelion4
2025-04-30T06:08:54Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2.5-3B", "base_model:finetune:unsloth/Qwen2.5-3B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-30T06:08:44Z
--- base_model: unsloth/Qwen2.5-3B tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** dandelion4 - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-3B This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
TarunKM/AUTONOMIQ_simpleformat_5_Epochs_jsonl
TarunKM
2025-04-30T06:02:11Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-30T06:02:07Z
--- base_model: unsloth/llama-3.1-8b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** TarunKM - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.1-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
YOYO-AI/Qwen2.5-14B-YOYO-V6-test2
YOYO-AI
2025-04-30T06:00:33Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "mergekit", "merge", "conversational", "base_model:Zhihu-ai/Zhi-writing-dsr1-14b", "base_model:merge:Zhihu-ai/Zhi-writing-dsr1-14b", "base_model:agentica-org/DeepCoder-14B-Preview", "base_model:merge:agentica-org/DeepCoder-14B-Preview", "base_model:mergekit-community/Qwen2.5-14B-della-1M-dpo", "base_model:merge:mergekit-community/Qwen2.5-14B-della-1M-dpo", "base_model:mergekit-community/Qwen2.5-14B-della-Nova-dpo", "base_model:merge:mergekit-community/Qwen2.5-14B-della-Nova-dpo", "base_model:mergekit-community/Qwen2.5-14B-della-V6-dpo", "base_model:merge:mergekit-community/Qwen2.5-14B-della-V6-dpo", "base_model:mergekit-community/Qwen2.5-14B-della-base-dpo", "base_model:merge:mergekit-community/Qwen2.5-14B-della-base-dpo", "base_model:mergekit-community/Qwen2.5-14B-della-code", "base_model:merge:mergekit-community/Qwen2.5-14B-della-code", "base_model:mergekit-community/Qwen2.5-14B-della-v2-dpo", "base_model:merge:mergekit-community/Qwen2.5-14B-della-v2-dpo", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T04:55:40Z
--- base_model: - mergekit-community/Qwen2.5-14B-della-V6-dpo - mergekit-community/Qwen2.5-14B-della-Nova-dpo - agentica-org/DeepCoder-14B-Preview - mergekit-community/Qwen2.5-14B-della-base-dpo - mergekit-community/Qwen2.5-14B-della-1M-dpo - Zhihu-ai/Zhi-writing-dsr1-14b - mergekit-community/Qwen2.5-14B-della-v2-dpo - mergekit-community/Qwen2.5-14B-della-code library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Karcher Mean](https://en.wikipedia.org/wiki/Karcher_mean) merge method using [mergekit-community/Qwen2.5-14B-della-1M-dpo](https://huggingface.co/mergekit-community/Qwen2.5-14B-della-1M-dpo) as a base. ### Models Merged The following models were included in the merge: * [mergekit-community/Qwen2.5-14B-della-V6-dpo](https://huggingface.co/mergekit-community/Qwen2.5-14B-della-V6-dpo) * [mergekit-community/Qwen2.5-14B-della-Nova-dpo](https://huggingface.co/mergekit-community/Qwen2.5-14B-della-Nova-dpo) * [agentica-org/DeepCoder-14B-Preview](https://huggingface.co/agentica-org/DeepCoder-14B-Preview) * [mergekit-community/Qwen2.5-14B-della-base-dpo](https://huggingface.co/mergekit-community/Qwen2.5-14B-della-base-dpo) * [Zhihu-ai/Zhi-writing-dsr1-14b](https://huggingface.co/Zhihu-ai/Zhi-writing-dsr1-14b) * [mergekit-community/Qwen2.5-14B-della-v2-dpo](https://huggingface.co/mergekit-community/Qwen2.5-14B-della-v2-dpo) * [mergekit-community/Qwen2.5-14B-della-code](https://huggingface.co/mergekit-community/Qwen2.5-14B-della-code) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: Zhihu-ai/Zhi-writing-dsr1-14b - model: agentica-org/DeepCoder-14B-Preview - model: mergekit-community/Qwen2.5-14B-della-code - model: mergekit-community/Qwen2.5-14B-della-v2-dpo - model: mergekit-community/Qwen2.5-14B-della-V6-dpo - model: mergekit-community/Qwen2.5-14B-della-Nova-dpo - model: mergekit-community/Qwen2.5-14B-della-base-dpo - model: mergekit-community/Qwen2.5-14B-della-1M-dpo merge_method: karcher base_model: mergekit-community/Qwen2.5-14B-della-1M-dpo parameters: max_iter: 1000 tokenizer_source: base dtype: float16 int8_mask: true normalize: true ```
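For readers who want to reproduce a merge like this one, mergekit exposes a `mergekit-yaml` command-line entry point that consumes exactly this kind of configuration. The invocation below is a sketch: the config filename and output directory are placeholders, and available flags can vary between mergekit versions.

```bash
pip install mergekit
# run the merge described by the YAML config above; --cuda moves the tensor arithmetic to the GPU
mergekit-yaml config.yaml ./merged-model --cuda
```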
Agnieszka1/Zora
Agnieszka1
2025-04-30T05:57:17Z
0
0
null
[ "license:other", "region:us" ]
null
2025-04-30T05:15:12Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
hongseok729/gemma-medical-qa-finetune
hongseok729
2025-04-30T05:57:17Z
0
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T05:48:41Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
BIOMEDICA/BMC-smolvlm1-256M
BIOMEDICA
2025-04-30T05:55:45Z
0
0
null
[ "safetensors", "idefics3", "en", "dataset:BIOMEDICA/biomedica_webdataset_24M", "base_model:HuggingFaceTB/SmolVLM-256M-Instruct", "base_model:finetune:HuggingFaceTB/SmolVLM-256M-Instruct", "region:us" ]
null
2025-04-30T04:02:07Z
--- datasets: - BIOMEDICA/biomedica_webdataset_24M language: - en base_model: - HuggingFaceTB/SmolVLM-256M-Instruct --- <div align="center" style="margin-bottom: -20px;"> <img src="https://raw.githubusercontent.com/minwoosun/biomedica-etl/refs/heads/main/media/Biomedica-Isologo-sin-espacio-2025.png" alt="Pull Figure" width="300" /> </div> BMC-SmolVLM1 is a family of lightweight biomedical vision-language models (ranging from 256M to 2.2B parameters) based on SmolVLM. These models are designed for efficient multimodal understanding in the biomedical domain. Please ensure you are using a GPU runtime when running the tutorial notebook below. Colab Tutorial: [![Colab Tutorial](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1Bg_pdLsXfHVX0U8AESL7TaiBQLDy2G7j?usp=sharing)
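Since the card defers usage details to the Colab notebook, here is a minimal local-inference sketch following the standard SmolVLM/Idefics3 recipe in transformers. It assumes this checkpoint keeps SmolVLM's chat template; the image path and prompt are placeholders.

```py
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "BIOMEDICA/BMC-smolvlm1-256M"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, torch_dtype=torch.bfloat16).to("cuda")

image = Image.open("figure.png")  # placeholder: any biomedical figure
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this figure."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to("cuda")
generated = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```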
atokuw/distilhubert-finetuned-gtzan
atokuw
2025-04-30T05:53:19Z
0
0
transformers
[ "transformers", "safetensors", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:ntu-spml/distilhubert", "base_model:finetune:ntu-spml/distilhubert", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2025-04-30T03:40:30Z
--- library_name: transformers license: apache-2.0 base_model: ntu-spml/distilhubert tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: distilhubert-finetuned-gtzan results: - task: name: Audio Classification type: audio-classification dataset: name: GTZAN type: marsyas/gtzan metrics: - name: Accuracy type: accuracy value: 0.84 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilhubert-finetuned-gtzan This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.5418 - Accuracy: 0.84 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9263 | 1.0 | 113 | 1.8569 | 0.5 | | 1.1988 | 2.0 | 226 | 1.2287 | 0.7 | | 1.0255 | 3.0 | 339 | 0.9869 | 0.73 | | 0.6431 | 4.0 | 452 | 0.8331 | 0.74 | | 0.4614 | 5.0 | 565 | 0.6698 | 0.83 | | 0.3791 | 6.0 | 678 | 0.5157 | 0.87 | | 0.2296 | 7.0 | 791 | 0.5229 | 0.86 | | 0.0998 | 8.0 | 904 | 0.6168 | 0.84 | | 0.1247 | 9.0 | 1017 | 0.5637 | 0.83 | | 0.0802 | 10.0 | 1130 | 0.5418 | 0.84 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Tokenizers 0.21.1
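The card stops at training details; since the checkpoint carries the audio-classification pipeline tag, a minimal inference sketch (the WAV path is a placeholder; GTZAN uses 30-second excerpts) would be:

```py
from transformers import pipeline

# loads the fine-tuned checkpoint together with its feature extractor
classifier = pipeline("audio-classification", model="atokuw/distilhubert-finetuned-gtzan")
print(classifier("track.wav"))  # placeholder path; returns genre labels with scores
```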
charlesyao2005/llama_sft_4
charlesyao2005
2025-04-30T05:52:32Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-30T05:52:19Z
--- base_model: unsloth/meta-llama-3.1-8b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** charlesyao2005 - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
xiaoyuanliu/Qwen2.5-3B-simplerl-ppo-offline.critique-100-6k
xiaoyuanliu
2025-04-30T05:50:56Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T05:46:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MaksimPro/Qwen2.5-7B-Instruct-merged1-Q4_K_M-GGUF
MaksimPro
2025-04-30T05:47:03Z
0
0
diffusers
[ "diffusers", "gguf", "text-to-image", "lora", "template:diffusion-lora", "llama-cpp", "gguf-my-repo", "base_model:MaksimPro/Qwen2.5-7B-Instruct-merged1", "base_model:adapter:MaksimPro/Qwen2.5-7B-Instruct-merged1", "endpoints_compatible", "region:us", "conversational" ]
text-to-image
2025-04-30T05:46:41Z
--- base_model: MaksimPro/Qwen2.5-7B-Instruct-merged1 tags: - text-to-image - lora - diffusers - template:diffusion-lora - llama-cpp - gguf-my-repo widget: - text: '-' output: url: images/hf-logo-with-title.png - text: '-' output: url: images/qwen_omni.png --- # MaksimPro/Qwen2.5-7B-Instruct-merged1-Q4_K_M-GGUF This model was converted to GGUF format from [`MaksimPro/Qwen2.5-7B-Instruct-merged1`](https://huggingface.co/MaksimPro/Qwen2.5-7B-Instruct-merged1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/MaksimPro/Qwen2.5-7B-Instruct-merged1) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo MaksimPro/Qwen2.5-7B-Instruct-merged1-Q4_K_M-GGUF --hf-file qwen2.5-7b-instruct-merged1-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo MaksimPro/Qwen2.5-7B-Instruct-merged1-Q4_K_M-GGUF --hf-file qwen2.5-7b-instruct-merged1-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo MaksimPro/Qwen2.5-7B-Instruct-merged1-Q4_K_M-GGUF --hf-file qwen2.5-7b-instruct-merged1-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo MaksimPro/Qwen2.5-7B-Instruct-merged1-Q4_K_M-GGUF --hf-file qwen2.5-7b-instruct-merged1-q4_k_m.gguf -c 2048 ```
MrRobotoAI/F4-Q4_K_M-GGUF
MrRobotoAI
2025-04-30T05:42:18Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:MrRobotoAI/F4", "base_model:quantized:MrRobotoAI/F4", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-30T05:41:53Z
--- base_model: MrRobotoAI/F4 library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # MrRobotoAI/F4-Q4_K_M-GGUF This model was converted to GGUF format from [`MrRobotoAI/F4`](https://huggingface.co/MrRobotoAI/F4) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/MrRobotoAI/F4) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo MrRobotoAI/F4-Q4_K_M-GGUF --hf-file f4-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo MrRobotoAI/F4-Q4_K_M-GGUF --hf-file f4-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo MrRobotoAI/F4-Q4_K_M-GGUF --hf-file f4-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo MrRobotoAI/F4-Q4_K_M-GGUF --hf-file f4-q4_k_m.gguf -c 2048 ```
Chidem/mistral-mini-finetuned-SWOW
Chidem
2025-04-30T05:41:39Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "base_model:quantized:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-04-30T05:40:29Z
--- base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mistral - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Chidem - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
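The card records only training provenance, so here is a minimal text-generation sketch. It assumes the standard transformers causal-LM API and the 4-bit bitsandbytes setup implied by the repository tags; the prompt is a placeholder, and the [INST] format is an assumption carried over from the Mistral-Instruct base model.

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Chidem/mistral-mini-finetuned-SWOW"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),  # tags indicate a 4-bit bnb checkpoint
    device_map="auto",
)

inputs = tokenizer("[INST] Give three word associations for 'ocean'. [/INST]", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```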
GilatToker/CV_T5
GilatToker
2025-04-30T05:40:13Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-04-30T05:39:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
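The "How to Get Started" section above is still a placeholder. A minimal sketch, assuming only what this row's metadata states (a T5 checkpoint served through the `text2text-generation` pipeline); the repository ID below is a hypothetical placeholder, since the card does not name the model:

```python
from transformers import pipeline

# Hypothetical repository ID -- the card does not name the model; substitute the real Hub ID.
generator = pipeline("text2text-generation", model="your-username/your-t5-model")

# T5-style models map an input string to an output string.
result = generator("translate English to German: How old are you?", max_new_tokens=40)
print(result[0]["generated_text"])
```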
Carlosvirella100/LLMITO
Carlosvirella100
2025-04-30T05:39:05Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2025-04-30T05:39:05Z
--- license: bigscience-openrail-m ---
MrRobotoAI/F3-Q4_K_M-GGUF
MrRobotoAI
2025-04-30T05:38:56Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:MrRobotoAI/F3", "base_model:quantized:MrRobotoAI/F3", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-30T05:38:34Z
--- base_model: MrRobotoAI/F3 library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # MrRobotoAI/F3-Q4_K_M-GGUF This model was converted to GGUF format from [`MrRobotoAI/F3`](https://huggingface.co/MrRobotoAI/F3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/MrRobotoAI/F3) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo MrRobotoAI/F3-Q4_K_M-GGUF --hf-file f3-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo MrRobotoAI/F3-Q4_K_M-GGUF --hf-file f3-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo MrRobotoAI/F3-Q4_K_M-GGUF --hf-file f3-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo MrRobotoAI/F3-Q4_K_M-GGUF --hf-file f3-q4_k_m.gguf -c 2048 ```
bkbj/modeltestA
bkbj
2025-04-30T05:38:45Z
0
0
espnet
[ "espnet", "en", "dataset:zwhe99/DeepMath-103K", "base_model:microsoft/bitnet-b1.58-2B-4T", "base_model:finetune:microsoft/bitnet-b1.58-2B-4T", "license:apache-2.0", "region:us" ]
null
2025-04-30T05:37:47Z
--- license: apache-2.0 datasets: - zwhe99/DeepMath-103K language: - en metrics: - cer base_model: - microsoft/bitnet-b1.58-2B-4T new_version: HiDream-ai/HiDream-I1-Full library_name: espnet ---
GilatToker/Violence_Deberta
GilatToker
2025-04-30T05:38:26Z
0
0
transformers
[ "transformers", "safetensors", "deberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-30T05:33:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
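The "How to Get Started" section above is still a placeholder. A minimal sketch, assuming the standard `text-classification` pipeline API; the repository ID is taken from this row's metadata, and the classifier's label names are not documented in the card:

```python
from transformers import pipeline

# Repository ID from this row's metadata; the label set is not documented in the card.
classifier = pipeline("text-classification", model="GilatToker/Violence_Deberta")

prediction = classifier("Example sentence to screen for violent content.")
print(prediction)  # e.g. [{'label': ..., 'score': ...}]; actual labels depend on the fine-tuning setup
```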
kerncore/llama-3-swe
kerncore
2025-04-30T05:22:56Z
0
0
null
[ "safetensors", "llama", "merge", "mergekit", "lazymergekit", "AI-Sweden-Models/Llama-3-8B-instruct", "base_model:AI-Sweden-Models/Llama-3-8B-instruct", "base_model:finetune:AI-Sweden-Models/Llama-3-8B-instruct", "region:us" ]
null
2025-04-30T04:57:50Z
--- base_model: - AI-Sweden-Models/Llama-3-8B-instruct tags: - merge - mergekit - lazymergekit - AI-Sweden-Models/Llama-3-8B-instruct --- # NeuralDaredevil-8B-abliterated-x-Llama-3-8B-instruct NeuralDaredevil-8B-abliterated-x-Llama-3-8B-instruct is a merge of the following model into the base model [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated), using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing) with the DARE-TIES method shown in the configuration below: * [AI-Sweden-Models/Llama-3-8B-instruct](https://huggingface.co/AI-Sweden-Models/Llama-3-8B-instruct) ## 🧩 Configuration ```yaml models: - model: mlabonne/NeuralDaredevil-8B-abliterated # No parameters necessary for base model - model: AI-Sweden-Models/Llama-3-8B-instruct parameters: density: 0.53 weight: 0.6 merge_method: dare_ties base_model: mlabonne/NeuralDaredevil-8B-abliterated parameters: int8_mask: true dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "IsakNordgren/NeuralDaredevil-8B-abliterated-x-Llama-3-8B-instruct" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
briannaulriq/SlimorolKapselnDEATCH
briannaulriq
2025-04-30T05:20:39Z
0
0
null
[ "region:us" ]
null
2025-04-30T05:19:37Z
**⇉⇉ Shop now ➧➧** [https://www.wellholistic.today/de/slimorol-kapseln/](https://www.wellholistic.today/de/slimorol-kapseln/) **⇉⇉ Facebook link ➧➧** [https://www.facebook.com/groups/slimorolkapselnoffizielle](https://www.facebook.com/groups/slimorolkapselnoffizielle) **What is Slimorol?** [Slimorol](https://www.wellholistic.today/de/slimorol-kapseln/) is a natural dietary supplement that supports people in losing weight. It is suited to people with metabolism and energy problems and contains a blend of active ingredients that work synergistically to improve metabolic health. The capsules are said to be particularly useful for adults with chronic weight problems and, compared with conventional methods, to offer a clean and effective way to lose weight. The central philosophy behind Slimorol is that sustainable weight management is not so much about counting calories; rather, it requires a comprehensive strategy that addresses the underlying metabolic causes. With its carefully selected ingredients, Slimorol is intended to boost the metabolism, increase satiety, and deliver sustained energy throughout the day. **>> [Special offer: buy now at the best price](https://www.wellholistic.today/Buy-Slimorol)** **Does Slimorol work?** The effectiveness of [Slimorol](https://www.wellholistic.today/de/slimorol-kapseln/) is attributed to its science-based approach. Each ingredient was carefully selected on the basis of research highlighting its possible benefits for weight control and metabolic support. Individual results may vary, but the synergistic effect of these ingredients is presented as an effective mechanism for supporting weight loss. Green tea extract, one of Slimorol's main ingredients, is known for boosting the metabolism and stimulating fat burning. Studies show that the catechins in green tea can increase metabolism and fat burning, particularly during physical activity, which makes it a good aid for anyone who builds exercise into their weight-loss plan. Garcinia cambogia, another main ingredient, has been studied extensively for its appetite-suppressing effect. Studies indicate that it relieves hunger and reduces snack cravings, so users can stick to their diet plan. 
By suppressing appetite, Slimorol helps users control their calorie intake without feeling deprived.
eoeosb/gemma-medical-qa-finetune
eoeosb
2025-04-30T05:19:02Z
0
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T05:10:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
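The "How to Get Started" section above is still a placeholder. A minimal sketch, assuming the standard chat-style `text-generation` pipeline API; the repository ID is taken from this row's metadata, and the example question is illustrative only:

```python
from transformers import pipeline
import torch

# Repository ID from this row's metadata.
generator = pipeline(
    "text-generation",
    model="eoeosb/gemma-medical-qa-finetune",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "What are common symptoms of iron deficiency?"}]
output = generator(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```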
Arovi-Nusrat-Ridhi-Xn/wATCH.Arovi.Nusrat.Ridhi.Xn.viral.video.original
Arovi-Nusrat-Ridhi-Xn
2025-04-30T05:18:04Z
0
0
null
[ "region:us" ]
null
2025-04-30T05:16:50Z
[🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=Arovi-Nusrat-Ridhi) [🔴 CLICK HERE 🌐==►► Download Now](https://videohere.top/?V=Arovi-Nusrat-Ridhi) [<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=Arovi-Nusrat-Ridhi)
guoanjie/dqn-SpaceInvadersNoFrameskip-v4
guoanjie
2025-04-30T05:16:25Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-04-30T05:15:55Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 500.50 +/- 170.55 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib SBX (SB3 + Jax): https://github.com/araffin/sbx Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga guoanjie -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga guoanjie -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga guoanjie ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
filipesantoscv11/c23ff790-e37e-4aea-9703-f9b0e32d77cc
filipesantoscv11
2025-04-30T05:15:38Z
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-7B-Instruct", "base_model:adapter:unsloth/Qwen2-7B-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-04-30T04:33:29Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-7B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: c23ff790-e37e-4aea-9703-f9b0e32d77cc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2-7B-Instruct bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 867db9eee814c64e_train_data.json ds_type: json format: custom path: /workspace/input_data/867db9eee814c64e_train_data.json type: field_instruction: problem field_output: solution format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: filipesantoscv11/c23ff790-e37e-4aea-9703-f9b0e32d77cc hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/867db9eee814c64e_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: ab7a8a3e-97be-4132-b5ba-3fcbabe3e90d wandb_project: s56-6 wandb_run: your_name wandb_runid: ab7a8a3e-97be-4132-b5ba-3fcbabe3e90d warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # c23ff790-e37e-4aea-9703-f9b0e32d77cc This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5317 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.4965 | 0.0191 | 200 | 0.5317 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
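Since this repository holds a LoRA adapter (note `library_name: peft` and the `adapter` tag), here is a minimal loading sketch, assuming the standard PEFT API; the base model and adapter ID are taken from the config above, and the prompt is illustrative:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model and adapter repository are both named in the axolotl config above.
base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2-7B-Instruct", device_map="auto")
model = PeftModel.from_pretrained(base, "filipesantoscv11/c23ff790-e37e-4aea-9703-f9b0e32d77cc")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2-7B-Instruct")

inputs = tokenizer("Problem: solve 2x + 3 = 11 for x.", return_tensors="pt").to(base.device)
output_ids = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```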
mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF
mradermacher
2025-04-30T05:15:21Z
124
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "base_model:Lambent/qwen2.5-reinstruct-alternate-lumen-14B", "base_model:quantized:Lambent/qwen2.5-reinstruct-alternate-lumen-14B", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-24T22:12:16Z
--- base_model: Lambent/qwen2.5-reinstruct-alternate-lumen-14B language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Lambent/qwen2.5-reinstruct-alternate-lumen-14B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF/resolve/main/qwen2.5-reinstruct-alternate-lumen-14B.Q2_K.gguf) | Q2_K | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF/resolve/main/qwen2.5-reinstruct-alternate-lumen-14B.IQ3_XS.gguf) | IQ3_XS | 6.5 | | | [GGUF](https://huggingface.co/mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF/resolve/main/qwen2.5-reinstruct-alternate-lumen-14B.Q3_K_S.gguf) | Q3_K_S | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF/resolve/main/qwen2.5-reinstruct-alternate-lumen-14B.IQ3_S.gguf) | IQ3_S | 6.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF/resolve/main/qwen2.5-reinstruct-alternate-lumen-14B.IQ3_M.gguf) | IQ3_M | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF/resolve/main/qwen2.5-reinstruct-alternate-lumen-14B.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF/resolve/main/qwen2.5-reinstruct-alternate-lumen-14B.Q3_K_L.gguf) | Q3_K_L | 8.0 | | | [GGUF](https://huggingface.co/mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF/resolve/main/qwen2.5-reinstruct-alternate-lumen-14B.IQ4_XS.gguf) | IQ4_XS | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF/resolve/main/qwen2.5-reinstruct-alternate-lumen-14B.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF/resolve/main/qwen2.5-reinstruct-alternate-lumen-14B.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF/resolve/main/qwen2.5-reinstruct-alternate-lumen-14B.Q5_K_S.gguf) | Q5_K_S | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF/resolve/main/qwen2.5-reinstruct-alternate-lumen-14B.Q5_K_M.gguf) | Q5_K_M | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF/resolve/main/qwen2.5-reinstruct-alternate-lumen-14B.Q6_K.gguf) | Q6_K | 12.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF/resolve/main/qwen2.5-reinstruct-alternate-lumen-14B.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality | Here is a 
handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
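For a concrete starting point, a minimal sketch, assuming llama.cpp's `llama-cli` with Hugging Face download support; the repository and file name come from the quant table above, and the prompt is illustrative:

```bash
# Downloads the Q4_K_M quant from this repository and runs a one-off prompt.
llama-cli --hf-repo mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF \
  --hf-file qwen2.5-reinstruct-alternate-lumen-14B.Q4_K_M.gguf \
  -p "Write a short note on quantization trade-offs."
```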
cilantro9246/gemma2-v1-6
cilantro9246
2025-04-30T05:13:29Z
0
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "gemma", "google", "Bifröst", "Bifrost", "code", "text-generation", "conversational", "base_model:google/gemma-3-27b-it", "base_model:finetune:google/gemma-3-27b-it", "license:gemma", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T05:13:25Z
--- license: gemma library_name: transformers pipeline_tag: text-generation extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: >- To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license base_model: google/gemma-3-27b-it tags: - transformers - gemma3 - gemma - google - Bifröst - Bifrost - code --- ## Bifröst-27B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64a834a8895fd6416e29576f/sAXfe0cQdULI_GEVxBstw.png) Bifröst-27B is an advanced AI model built upon the gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance. ### Model Details - **Model Name:** Bifröst-27B - **Base Architecture:** gemma3 - **Application:** Enterprise Secure Code Generation - **Release Date:** 16-March-2025 ### Intended Use Bifröst is designed explicitly for: - Generating secure, efficient, and high-quality code. - Supporting development tasks within regulated enterprise environments. - Enhancing productivity by automating routine coding tasks without compromising security. ### Features - **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards. - **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions. - **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2). ### Limitations - Bifröst should be used under human supervision to ensure code correctness and security compliance. - Model-generated code should undergo appropriate security and quality assurance checks before deployment. ### Ethical Considerations - Users are encouraged to perform regular audits and compliance checks on generated outputs. - Enterprises should implement responsible AI practices to mitigate biases or unintended consequences. ### Usage Below are some quick-start instructions for using the model with the `transformers` library. #### Installation ```sh $ pip install git+https://github.com/huggingface/[email protected] ``` #### Running with the `pipeline` API ```python from transformers import pipeline import torch pipe = pipeline( "text-generation", model="OpenGenerativeAI/Bifrost-27B", device="cuda", torch_dtype=torch.bfloat16 ) messages = [{"role": "user", "content": "Generate a secure API key management system."}] output = pipe(messages, max_new_tokens=200) print(output[0]["generated_text"]) ``` ## Terms of Use This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use.
cilantro9246/gemma2-v1-4
cilantro9246
2025-04-30T05:13:19Z
0
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "gemma", "google", "Bifröst", "Bifrost", "code", "text-generation", "conversational", "base_model:google/gemma-3-27b-it", "base_model:finetune:google/gemma-3-27b-it", "license:gemma", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T05:13:15Z
--- license: gemma library_name: transformers pipeline_tag: text-generation extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: >- To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license base_model: google/gemma-3-27b-it tags: - transformers - gemma3 - gemma - google - Bifröst - Bifrost - code --- ## Bifröst-27B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64a834a8895fd6416e29576f/sAXfe0cQdULI_GEVxBstw.png) Bifröst-27B is an advanced AI model built upon the gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance. ### Model Details - **Model Name:** Bifröst-27B - **Base Architecture:** gemma3 - **Application:** Enterprise Secure Code Generation - **Release Date:** 16-March-2025 ### Intended Use Bifröst is designed explicitly for: - Generating secure, efficient, and high-quality code. - Supporting development tasks within regulated enterprise environments. - Enhancing productivity by automating routine coding tasks without compromising security. ### Features - **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards. - **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions. - **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2). ### Limitations - Bifröst should be used under human supervision to ensure code correctness and security compliance. - Model-generated code should undergo appropriate security and quality assurance checks before deployment. ### Ethical Considerations - Users are encouraged to perform regular audits and compliance checks on generated outputs. - Enterprises should implement responsible AI practices to mitigate biases or unintended consequences. ### Usage Below are some quick-start instructions for using the model with the `transformers` library. #### Installation ```sh $ pip install git+https://github.com/huggingface/[email protected] ``` #### Running with the `pipeline` API ```python from transformers import pipeline import torch pipe = pipeline( "text-generation", model="OpenGenerativeAI/Bifrost-27B", device="cuda", torch_dtype=torch.bfloat16 ) messages = [{"role": "user", "content": "Generate a secure API key management system."}] output = pipe(messages, max_new_tokens=200) print(output[0]["generated_text"]) ``` ## Terms of Use This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use.
cilantro9246/gemma2-v1-3
cilantro9246
2025-04-30T05:13:14Z
0
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "gemma", "google", "Bifröst", "Bifrost", "code", "text-generation", "conversational", "base_model:google/gemma-3-27b-it", "base_model:finetune:google/gemma-3-27b-it", "license:gemma", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T05:13:11Z
--- license: gemma library_name: transformers pipeline_tag: text-generation extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: >- To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license base_model: google/gemma-3-27b-it tags: - transformers - gemma3 - gemma - google - Bifröst - Bifrost - code --- ## Bifröst-27B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64a834a8895fd6416e29576f/sAXfe0cQdULI_GEVxBstw.png) Bifröst-27B is an advanced AI model built upon the gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance. ### Model Details - **Model Name:** Bifröst-27B - **Base Architecture:** gemma3 - **Application:** Enterprise Secure Code Generation - **Release Date:** 16-March-2025 ### Intended Use Bifröst is designed explicitly for: - Generating secure, efficient, and high-quality code. - Supporting development tasks within regulated enterprise environments. - Enhancing productivity by automating routine coding tasks without compromising security. ### Features - **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards. - **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions. - **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2). ### Limitations - Bifröst should be used under human supervision to ensure code correctness and security compliance. - Model-generated code should undergo appropriate security and quality assurance checks before deployment. ### Ethical Considerations - Users are encouraged to perform regular audits and compliance checks on generated outputs. - Enterprises should implement responsible AI practices to mitigate biases or unintended consequences. ### Usage Below are some quick-start instructions for using the model with the `transformers` library. #### Installation ```sh $ pip install git+https://github.com/huggingface/[email protected] ``` #### Running with the `pipeline` API ```python from transformers import pipeline import torch pipe = pipeline( "text-generation", model="OpenGenerativeAI/Bifrost-27B", device="cuda", torch_dtype=torch.bfloat16 ) messages = [{"role": "user", "content": "Generate a secure API key management system."}] output = pipe(messages, max_new_tokens=200) print(output[0]["generated_text"]) ``` ## Terms of Use This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use.
cilantro9246/gemma2-v1-2
cilantro9246
2025-04-30T05:13:10Z
0
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "gemma", "google", "Bifröst", "Bifrost", "code", "text-generation", "conversational", "base_model:google/gemma-3-27b-it", "base_model:finetune:google/gemma-3-27b-it", "license:gemma", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T05:13:06Z
--- license: gemma library_name: transformers pipeline_tag: text-generation extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: >- To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license base_model: google/gemma-3-27b-it tags: - transformers - gemma3 - gemma - google - Bifröst - Bifrost - code --- ## Bifröst-27B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64a834a8895fd6416e29576f/sAXfe0cQdULI_GEVxBstw.png) Bifröst-27B is an advanced AI model built upon the gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance. ### Model Details - **Model Name:** Bifröst-27B - **Base Architecture:** gemma3 - **Application:** Enterprise Secure Code Generation - **Release Date:** 16-March-2025 ### Intended Use Bifröst is designed explicitly for: - Generating secure, efficient, and high-quality code. - Supporting development tasks within regulated enterprise environments. - Enhancing productivity by automating routine coding tasks without compromising security. ### Features - **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards. - **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions. - **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2). ### Limitations - Bifröst should be used under human supervision to ensure code correctness and security compliance. - Model-generated code should undergo appropriate security and quality assurance checks before deployment. ### Ethical Considerations - Users are encouraged to perform regular audits and compliance checks on generated outputs. - Enterprises should implement responsible AI practices to mitigate biases or unintended consequences. ### Usage Below are some quick-start instructions for using the model with the `transformers` library. #### Installation ```sh $ pip install git+https://github.com/huggingface/[email protected] ``` #### Running with the `pipeline` API ```python from transformers import pipeline import torch pipe = pipeline( "text-generation", model="OpenGenerativeAI/Bifrost-27B", device="cuda", torch_dtype=torch.bfloat16 ) messages = [{"role": "user", "content": "Generate a secure API key management system."}] output = pipe(messages, max_new_tokens=200) print(output[0]["generated_text"]) ``` ## Terms of Use This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use.
magnifi/Phi3_intent_v60_1_w_unknown_4_lr_0.002
magnifi
2025-04-30T05:12:53Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T05:10:44Z
--- base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mistral - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** magnifi - **License:** apache-2.0 - **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
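No usage snippet is included in this card; a minimal inference sketch, assuming the standard chat-style `text-generation` pipeline API. The repository ID comes from this row's metadata, the model's intent label set is not documented, and the prompt is illustrative:

```python
from transformers import pipeline
import torch

# Repository ID from this row's metadata; the intent schema is not documented in the card.
pipe = pipeline(
    "text-generation",
    model="magnifi/Phi3_intent_v60_1_w_unknown_4_lr_0.002",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Show me Tesla's stock performance this year."}]
print(pipe(messages, max_new_tokens=64)[0]["generated_text"])
```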
unrented5443/sn11-v3-2-12
unrented5443
2025-04-30T05:10:33Z
0
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "gemma", "google", "Bifröst", "Bifrost", "code", "text-generation", "conversational", "base_model:google/gemma-3-27b-it", "base_model:finetune:google/gemma-3-27b-it", "license:gemma", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T05:10:30Z
--- license: gemma library_name: transformers pipeline_tag: text-generation extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: >- To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license base_model: google/gemma-3-27b-it tags: - transformers - gemma3 - gemma - google - Bifröst - Bifrost - code --- ## Bifröst-27B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64a834a8895fd6416e29576f/sAXfe0cQdULI_GEVxBstw.png) Bifröst-27B is an advanced AI model built upon the gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance. ### Model Details - **Model Name:** Bifröst-27B - **Base Architecture:** gemma3 - **Application:** Enterprise Secure Code Generation - **Release Date:** 16-March-2025 ### Intended Use Bifröst is designed explicitly for: - Generating secure, efficient, and high-quality code. - Supporting development tasks within regulated enterprise environments. - Enhancing productivity by automating routine coding tasks without compromising security. ### Features - **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards. - **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions. - **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2). ### Limitations - Bifröst should be used under human supervision to ensure code correctness and security compliance. - Model-generated code should undergo appropriate security and quality assurance checks before deployment. ### Ethical Considerations - Users are encouraged to perform regular audits and compliance checks on generated outputs. - Enterprises should implement responsible AI practices to mitigate biases or unintended consequences. ### Usage Below are some quick-start instructions for using the model with the `transformers` library. #### Installation ```sh $ pip install git+https://github.com/huggingface/[email protected] ``` #### Running with the `pipeline` API ```python from transformers import pipeline import torch pipe = pipeline( "text-generation", model="OpenGenerativeAI/Bifrost-27B", device="cuda", torch_dtype=torch.bfloat16 ) messages = [{"role": "user", "content": "Generate a secure API key management system."}] output = pipe(messages, max_new_tokens=200) print(output[0]["generated_text"]) ``` ## Terms of Use This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use.
unrented5443/sn11-v3-2-15
unrented5443
2025-04-30T05:10:20Z
0
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "gemma", "google", "Bifröst", "Bifrost", "code", "text-generation", "conversational", "base_model:google/gemma-3-27b-it", "base_model:finetune:google/gemma-3-27b-it", "license:gemma", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T05:10:16Z
--- license: gemma library_name: transformers pipeline_tag: text-generation extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: >- To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license base_model: google/gemma-3-27b-it tags: - transformers - gemma3 - gemma - google - Bifröst - Bifrost - code --- ## Bifröst-27B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64a834a8895fd6416e29576f/sAXfe0cQdULI_GEVxBstw.png) Bifröst-27B is an advanced AI model built upon the gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance. ### Model Details - **Model Name:** Bifröst-27B - **Base Architecture:** gemma3 - **Application:** Enterprise Secure Code Generation - **Release Date:** 16-March-2025 ### Intended Use Bifröst is designed explicitly for: - Generating secure, efficient, and high-quality code. - Supporting development tasks within regulated enterprise environments. - Enhancing productivity by automating routine coding tasks without compromising security. ### Features - **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards. - **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions. - **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2). ### Limitations - Bifröst should be used under human supervision to ensure code correctness and security compliance. - Model-generated code should undergo appropriate security and quality assurance checks before deployment. ### Ethical Considerations - Users are encouraged to perform regular audits and compliance checks on generated outputs. - Enterprises should implement responsible AI practices to mitigate biases or unintended consequences. ### Usage Below are some quick-start instructions for using the model with the `transformers` library. #### Installation ```sh $ pip install git+https://github.com/huggingface/[email protected] ``` #### Running with the `pipeline` API ```python from transformers import pipeline import torch pipe = pipeline( "text-generation", model="OpenGenerativeAI/Bifrost-27B", device="cuda", torch_dtype=torch.bfloat16 ) messages = [{"role": "user", "content": "Generate a secure API key management system."}] output = pipe(messages, max_new_tokens=200) print(output[0]["generated_text"]) ``` ## Terms of Use This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use.
0xtinuviel/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-robust_lightfooted_moose
0xtinuviel
2025-04-30T05:09:52Z
17
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am robust lightfooted moose", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-13T02:01:55Z
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-robust_lightfooted_moose
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am robust lightfooted moose
- trl
licence: license
---

# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-robust_lightfooted_moose

This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="0xtinuviel/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-robust_lightfooted_moose", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). An illustrative GRPO training sketch appears at the end of this card.

### Framework versions

- TRL: 0.15.2
- Transformers: 4.51.2
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
    title        = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author       = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year         = 2024,
    eprint       = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
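
### What a GRPO run looks like (sketch)

For readers curious how GRPO training is set up in TRL, here is a minimal sketch adapted from the TRL quickstart. The dataset, reward function, and output directory are illustrative placeholders and do not reproduce the Gensyn swarm configuration.

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder dataset; the actual swarm training data differs.
dataset = load_dataset("trl-lib/tldr", split="train")

# Toy reward: prefer completions close to 200 characters.
def reward_len(completions, **kwargs):
    return [-abs(200 - len(completion)) for completion in completions]

training_args = GRPOConfig(output_dir="qwen2.5-0.5b-grpo", logging_steps=10)
trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```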
hyungenie/gemma-medical-qa-finetune
hyungenie
2025-04-30T05:06:14Z
0
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T04:57:27Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
ivar26/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whiskered_mute_cassowary
ivar26
2025-04-30T05:05:47Z
8
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am whiskered mute cassowary", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-18T15:20:36Z
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whiskered_mute_cassowary
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am whiskered mute cassowary
- trl
licence: license
---

# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whiskered_mute_cassowary

This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ivar26/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whiskered_mute_cassowary", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

A direct-loading alternative is sketched at the end of this card.

## Training procedure

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
    title        = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author       = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year         = 2024,
    eprint       = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
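
### Loading the model directly (sketch)

As an alternative to the pipeline quick start above, the model can be loaded with `AutoModelForCausalLM`. This is a minimal sketch assuming a CUDA-capable machine; the prompt and generation length are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ivar26/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whiskered_mute_cassowary"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Summarize GRPO in one sentence."}]
# Render the chat template into token ids ready for generation.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```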
Flo0620/Qwen2_5-VL-7B-8bit_SpiQA
Flo0620
2025-04-30T05:02:04Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-04-27T21:12:35Z
---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: Qwen2_5-VL-7B-8bit_SpiQA
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for Qwen2_5-VL-7B-8bit_SpiQA

This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Flo0620/Qwen2_5-VL-7B-8bit_SpiQA", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

Note: the base model is a vision-language checkpoint, so the auto-generated `text-generation` snippet above may not load it directly; the `image-text-to-text` pipeline is likely the better fit for multimodal inputs.

## Training procedure

This model was trained with SFT. An illustrative SFT sketch appears at the end of this card.

### Framework versions

- TRL: 0.15.2
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
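
### What an SFT run looks like (sketch)

For context, a minimal TRL SFT run follows the pattern below, adapted from the TRL quickstart. The dataset, output directory, and model are placeholders: a small causal LM stands in for the actual vision-language base model, whose SFT setup additionally needs the Qwen2.5-VL processor and an image-aware data collator, and the real run fine-tuned on SpiQA rather than this dataset.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder text dataset; the actual run used SpiQA image-question data.
dataset = load_dataset("trl-lib/Capybara", split="train")

training_args = SFTConfig(output_dir="sft-sketch")
trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",  # stand-in for the VL checkpoint
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```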
taobao-mnn/MiMo-7B-RL-Zero-MNN
taobao-mnn
2025-04-30T04:58:48Z
0
0
null
[ "chat", "text-generation", "en", "license:apache-2.0", "region:us" ]
text-generation
2025-04-30T04:54:27Z
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- chat
---

# MiMo-7B-RL-Zero-MNN

## Introduction

This is a 4-bit quantized MNN export of MiMo-7B-RL-Zero, created with [llmexport](https://github.com/alibaba/MNN/tree/master/transformers/llm/export).

## Download

```bash
# install the Hugging Face Hub CLI
pip install -U "huggingface_hub[cli]"
```

```bash
# shell download
huggingface-cli download taobao-mnn/MiMo-7B-RL-Zero-MNN --local-dir path/to/dir
```

```python
# SDK download
from huggingface_hub import snapshot_download
model_dir = snapshot_download('taobao-mnn/MiMo-7B-RL-Zero-MNN')
```

```bash
# git clone (ModelScope mirror)
git clone https://www.modelscope.cn/taobao-mnn/MiMo-7B-RL-Zero-MNN
```

## Usage

```bash
# clone MNN source
git clone https://github.com/alibaba/MNN.git

# compile
cd MNN
mkdir build && cd build
cmake .. -DMNN_LOW_MEMORY=true -DMNN_CPU_WEIGHT_DEQUANT_GEMM=true -DMNN_BUILD_LLM=true -DMNN_SUPPORT_TRANSFORMER_FUSE=true
make -j

# run
./llm_demo /path/to/MiMo-7B-RL-Zero-MNN/config.json prompt.txt
```

A scripted smoke test is sketched at the end of this card.

## Document

[MNN-LLM](https://mnn-docs.readthedocs.io/en/latest/transformers/llm.html#)
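
## Smoke test (sketch)

Once the demo binary is built, something like the following can drive it from Python. The paths are placeholders, and the assumption that `llm_demo` reads prompts from a plain text file follows the usage shown above.

```python
import subprocess
from pathlib import Path

# Placeholder paths: adjust to your MNN build directory and model download location.
demo = "MNN/build/llm_demo"
config = "path/to/MiMo-7B-RL-Zero-MNN/config.json"

# Write a prompt file matching the prompt.txt argument in the usage above.
Path("prompt.txt").write_text("Hello, who are you?\n")

# Run the compiled demo and fail loudly if it exits with an error.
subprocess.run([demo, config, "prompt.txt"], check=True)
```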
MJAEEEEE/gemma-medical-qa-finetune
MJAEEEEE
2025-04-30T04:52:40Z
0
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T04:47:14Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
PQPQPQHUST/Llama-3.2-1B-Instruct
PQPQPQHUST
2025-04-30T04:49:24Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-30T04:49:17Z
---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** PQPQPQHUST
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. A loading sketch appears below.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
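
## Loading with Unsloth (sketch)

A minimal inference sketch using Unsloth's `FastLanguageModel`, assuming a CUDA GPU; the sequence length, prompt, and generation length are illustrative values, not recommendations from the model author.

```python
from unsloth import FastLanguageModel

# Load the checkpoint in 4-bit to keep memory low on small GPUs.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="PQPQPQHUST/Llama-3.2-1B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

messages = [{"role": "user", "content": "Hello! What can you do?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids=inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```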