| Column | Dtype | Min | Max |
|:--------------|:------------------------|:--------------------|:--------------------|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-28 06:27:35 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (500 classes) | | |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-28 06:24:42 |
| card | string (length) | 11 | 1.01M |
mradermacher/hayat-v1-GGUF
mradermacher
2025-05-03T17:30:12Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:ninja/hayat-v1", "base_model:quantized:ninja/hayat-v1", "endpoints_compatible", "region:us" ]
null
2025-05-03T17:29:09Z
--- base_model: ninja/hayat-v1 language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/ninja/hayat-v1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/hayat-v1-GGUF/resolve/main/hayat-v1.Q2_K.gguf) | Q2_K | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/hayat-v1-GGUF/resolve/main/hayat-v1.Q3_K_S.gguf) | Q3_K_S | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/hayat-v1-GGUF/resolve/main/hayat-v1.Q3_K_M.gguf) | Q3_K_M | 0.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/hayat-v1-GGUF/resolve/main/hayat-v1.Q3_K_L.gguf) | Q3_K_L | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/hayat-v1-GGUF/resolve/main/hayat-v1.IQ4_XS.gguf) | IQ4_XS | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/hayat-v1-GGUF/resolve/main/hayat-v1.Q4_K_S.gguf) | Q4_K_S | 0.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/hayat-v1-GGUF/resolve/main/hayat-v1.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/hayat-v1-GGUF/resolve/main/hayat-v1.Q5_K_S.gguf) | Q5_K_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/hayat-v1-GGUF/resolve/main/hayat-v1.Q5_K_M.gguf) | Q5_K_M | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/hayat-v1-GGUF/resolve/main/hayat-v1.Q6_K.gguf) | Q6_K | 0.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/hayat-v1-GGUF/resolve/main/hayat-v1.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/hayat-v1-GGUF/resolve/main/hayat-v1.f16.gguf) | f16 | 0.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
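As a quick illustration of the usage note in the card above, here is a minimal sketch of loading one of the listed quants with the `llama-cpp-python` bindings; the choice of the Q4_K_M file, the prompt, and the generation settings are illustrative assumptions, not part of the original card.

```python
# Minimal sketch: load a single-file GGUF quant from this repo with llama-cpp-python.
# Assumes `pip install llama-cpp-python huggingface-hub`; the quant chosen here
# (Q4_K_M, listed in the table above) and the prompt are illustrative assumptions.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/hayat-v1-GGUF",
    filename="hayat-v1.Q4_K_M.gguf",  # one of the quants from the table above
    n_ctx=2048,                       # context window; adjust to the model's limit
)

out = llm("Write a one-sentence summary of what GGUF files are.", max_tokens=64)
print(out["choices"][0]["text"])
```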
fats-fme/76a35825-b79e-4b60-87d3-4641e40e4228
fats-fme
2025-05-03T17:26:52Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM2-360M", "base_model:adapter:unsloth/SmolLM2-360M", "license:apache-2.0", "region:us" ]
null
2025-05-03T17:19:46Z
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM2-360M tags: - axolotl - generated_from_trainer model-index: - name: 76a35825-b79e-4b60-87d3-4641e40e4228 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/SmolLM2-360M bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 9c46256a8024748c_train_data.json ds_type: json format: custom path: /workspace/input_data/9c46256a8024748c_train_data.json type: field_input: text field_instruction: instruction field_output: full_instruction format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto early_stopping_patience: 3 eval_max_new_tokens: 128 eval_steps: 100 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: false hub_model_id: fats-fme/76a35825-b79e-4b60-87d3-4641e40e4228 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 5.0e-05 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 128 lora_dropout: 0.1 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_memory: 0: 130GB max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/9c46256a8024748c_train_data.json model_type: AutoModelForCausalLM num_epochs: 10 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 100 saves_per_epoch: null sequence_len: 2048 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 62e2631f-f12b-4415-81fd-ab3b8d09115c wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 62e2631f-f12b-4415-81fd-ab3b8d09115c warmup_steps: 200 weight_decay: 0.01 xformers_attention: null ``` </details><br> # 76a35825-b79e-4b60-87d3-4641e40e4228 This model is a fine-tuned version of [unsloth/SmolLM2-360M](https://huggingface.co/unsloth/SmolLM2-360M) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.0252 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 200 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0002 | 1 | 1.3539 | | 0.2766 | 0.0214 | 100 | 0.2475 | | 0.0211 | 0.0427 | 200 | 0.0252 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
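Since the card above stops at the framework versions, here is a minimal sketch of loading this LoRA adapter on top of its SmolLM2-360M base with `peft` and `transformers`; the prompt and generation settings are illustrative assumptions.

```python
# Minimal sketch: attach the LoRA adapter from this repo to its SmolLM2-360M base.
# Assumes `pip install transformers peft torch`; prompt and generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/SmolLM2-360M"
adapter_id = "fats-fme/76a35825-b79e-4b60-87d3-4641e40e4228"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # wraps the base with the LoRA weights

inputs = tokenizer("Summarize what a LoRA adapter is.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```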
sergioalves/d915b428-af4f-4161-9679-18f5c67bbec6
sergioalves
2025-05-03T17:23:36Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM2-360M", "base_model:adapter:unsloth/SmolLM2-360M", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-05-03T17:19:39Z
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM2-360M tags: - axolotl - generated_from_trainer model-index: - name: d915b428-af4f-4161-9679-18f5c67bbec6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: true adapter: lora base_model: unsloth/SmolLM2-360M bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 9c46256a8024748c_train_data.json ds_type: json format: custom path: /workspace/input_data/9c46256a8024748c_train_data.json type: field_input: text field_instruction: instruction field_output: full_instruction format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: sergioalves/d915b428-af4f-4161-9679-18f5c67bbec6 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: false load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/9c46256a8024748c_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 62e2631f-f12b-4415-81fd-ab3b8d09115c wandb_project: s56-8 wandb_run: your_name wandb_runid: 62e2631f-f12b-4415-81fd-ab3b8d09115c warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # d915b428-af4f-4161-9679-18f5c67bbec6 This model is a fine-tuned version of [unsloth/SmolLM2-360M](https://huggingface.co/unsloth/SmolLM2-360M) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.0833 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.9386 | 0.0427 | 200 | 1.0833 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
ZhuangXialie/Qwen-code-7B-SFT-100k-v2-cot-v2
ZhuangXialie
2025-05-03T17:23:25Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "sft", "conversational", "dataset:local", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T13:53:28Z
--- datasets: local library_name: transformers model_name: Qwen-code-7B-SFT-100k-v2-cot-v2 tags: - generated_from_trainer - open-r1 - trl - sft licence: license --- # Model Card for Qwen-code-7B-SFT-100k-v2-cot-v2 This model is a fine-tuned version of [None](https://huggingface.co/None) on the [local](https://huggingface.co/datasets/local) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ZhuangXialie/Qwen-code-7B-SFT-100k-v2-cot-v2", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dyx_team/huggingface/runs/wr20u2n4) This model was trained with SFT. ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
dimasik1987/aef1db10-27c8-43af-8b53-76962e458a20
dimasik1987
2025-05-03T17:22:03Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM2-360M", "base_model:adapter:unsloth/SmolLM2-360M", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-03T17:19:46Z
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM2-360M tags: - axolotl - generated_from_trainer model-index: - name: aef1db10-27c8-43af-8b53-76962e458a20 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: unsloth/SmolLM2-360M bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 9c46256a8024748c_train_data.json ds_type: json format: custom path: /workspace/input_data/9c46256a8024748c_train_data.json type: field_input: text field_instruction: instruction field_output: full_instruction format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.55 group_by_length: false hub_model_id: dimasik1987/aef1db10-27c8-43af-8b53-76962e458a20 hub_repo: null hub_strategy: end hub_token: null learning_rate: 1.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 150 micro_batch_size: 10 mixed_precision: bf16 mlflow_experiment_name: /tmp/9c46256a8024748c_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 2048 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 62e2631f-f12b-4415-81fd-ab3b8d09115c wandb_project: s56-7 wandb_run: your_name wandb_runid: 62e2631f-f12b-4415-81fd-ab3b8d09115c warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # aef1db10-27c8-43af-8b53-76962e458a20 This model is a fine-tuned version of [unsloth/SmolLM2-360M](https://huggingface.co/unsloth/SmolLM2-360M) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.4316 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.0966 | 0.0400 | 150 | 1.4316 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
PeterL123/GLM-Z1-9B-0414
PeterL123
2025-05-03T17:05:05Z
0
0
transformers
[ "transformers", "safetensors", "glm4", "text-generation", "conversational", "zh", "en", "arxiv:2406.12793", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T17:04:11Z
--- license: mit language: - zh - en pipeline_tag: text-generation library_name: transformers --- # GLM-4-Z1-9B-0414 ## Introduction The GLM family welcomes a new generation of open-source models, the **GLM-4-32B-0414** series, featuring 32 billion parameters. Its performance is comparable to OpenAI's GPT series and DeepSeek's V3/R1 series, and it supports very user-friendly local deployment features. GLM-4-32B-Base-0414 was pre-trained on 15T of high-quality data, including a large amount of reasoning-type synthetic data, laying the foundation for subsequent reinforcement learning extensions. In the post-training stage, in addition to human preference alignment for dialogue scenarios, we also enhanced the model's performance in instruction following, engineering code, and function calling using techniques such as rejection sampling and reinforcement learning, strengthening the atomic capabilities required for agent tasks. GLM-4-32B-0414 achieves good results in areas such as engineering code, Artifact generation, function calling, search-based Q&A, and report generation. Some benchmarks even rival larger models like GPT-4o and DeepSeek-V3-0324 (671B). **GLM-Z1-32B-0414** is a reasoning model with **deep thinking capabilities**. This was developed based on GLM-4-32B-0414 through cold start and extended reinforcement learning, as well as further training of the model on tasks involving mathematics, code, and logic. Compared to the base model, GLM-Z1-32B-0414 significantly improves mathematical abilities and the capability to solve complex tasks. During the training process, we also introduced general reinforcement learning based on pairwise ranking feedback, further enhancing the model's general capabilities. **GLM-Z1-Rumination-32B-0414** is a deep reasoning model with **rumination capabilities** (benchmarked against OpenAI's Deep Research). Unlike typical deep thinking models, the rumination model employs longer periods of deep thought to solve more open-ended and complex problems (e.g., writing a comparative analysis of AI development in two cities and their future development plans). The rumination model integrates search tools during its deep thinking process to handle complex tasks and is trained by utilizing multiple rule-based rewards to guide and extend end-to-end reinforcement learning. Z1-Rumination shows significant improvements in research-style writing and complex retrieval tasks. Finally, **GLM-Z1-9B-0414** is a surprise. We employed the aforementioned series of techniques to train a 9B small-sized model that maintains the open-source tradition. Despite its smaller scale, GLM-Z1-9B-0414 still exhibits excellent capabilities in mathematical reasoning and general tasks. Its overall performance is already at a leading level among open-source models of the same size. Especially in resource-constrained scenarios, this model achieves an excellent balance between efficiency and effectiveness, providing a powerful option for users seeking lightweight deployment. ## Performance <p align="center"> <img width="100%" src="https://raw.githubusercontent.com/THUDM/GLM-4/refs/heads/main/resources/Bench-Z1-32B.png"> </p> <p align="center"> <img width="100%" src="https://raw.githubusercontent.com/THUDM/GLM-4/refs/heads/main/resources/Bench-Z1-9B.png"> </p> ## Model Usage Guidelines ### I. 
Sampling Parameters | Parameter | Recommended Value | Description | | ------------ | ----------------- | -------------------------------------------- | | temperature | **0.6** | Balances creativity and stability | | top_p | **0.95** | Cumulative probability threshold for sampling| | top_k | **40** | Filters out rare tokens while maintaining diversity | | max_new_tokens | **30000** | Leaves enough tokens for thinking | ### II. Enforced Thinking - Add \<think\>\n to the **first line**: Ensures the model thinks before responding - When using `chat_template.jinja`, the prompt is automatically injected to enforce this behavior ### III. Dialogue History Trimming - Retain only the **final user-visible reply**. Hidden thinking content should **not** be saved to history to reduce interference; this is already implemented in `chat_template.jinja` ### IV. Handling Long Contexts (YaRN) - When input length exceeds **8,192 tokens**, consider enabling YaRN (Rope Scaling) - In supported frameworks, add the following snippet to `config.json`: ```json "rope_scaling": { "type": "yarn", "factor": 4.0, "original_max_position_embeddings": 32768 } ``` - **Static YaRN** applies uniformly to all text. It may slightly degrade performance on short texts, so enable as needed. ## Inference Code Make sure you are using `transformers>=4.51.3`. ```python from transformers import AutoModelForCausalLM, AutoTokenizer MODEL_PATH = "THUDM/GLM-4-Z1-9B-0414" tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH) model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto") message = [{"role": "user", "content": "Let a, b be positive real numbers such that ab = a + b + 3. Determine the range of possible values for a + b."}] inputs = tokenizer.apply_chat_template( message, return_tensors="pt", add_generation_prompt=True, return_dict=True, ).to(model.device) generate_kwargs = { "input_ids": inputs["input_ids"], "attention_mask": inputs["attention_mask"], "max_new_tokens": 4096, "do_sample": False, } out = model.generate(**generate_kwargs) print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)) ``` ## Citations If you find our work useful, please consider citing the following paper. ``` @misc{glm2024chatglm, title={ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools}, author={Team GLM and Aohan Zeng and Bin Xu and Bowen Wang and Chenhui Zhang and Da Yin and Diego Rojas and Guanyu Feng and Hanlin Zhao and Hanyu Lai and Hao Yu and Hongning Wang and Jiadai Sun and Jiajie Zhang and Jiale Cheng and Jiayi Gui and Jie Tang and Jing Zhang and Juanzi Li and Lei Zhao and Lindong Wu and Lucen Zhong and Mingdao Liu and Minlie Huang and Peng Zhang and Qinkai Zheng and Rui Lu and Shuaiqi Duan and Shudan Zhang and Shulin Cao and Shuxun Yang and Weng Lam Tam and Wenyi Zhao and Xiao Liu and Xiao Xia and Xiaohan Zhang and Xiaotao Gu and Xin Lv and Xinghan Liu and Xinyi Liu and Xinyue Yang and Xixuan Song and Xunkai Zhang and Yifan An and Yifan Xu and Yilin Niu and Yuantao Yang and Yueyan Li and Yushi Bai and Yuxiao Dong and Zehan Qi and Zhaoyu Wang and Zhen Yang and Zhengxiao Du and Zhenyu Hou and Zihan Wang}, year={2024}, eprint={2406.12793}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
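To complement the greedy-decoding example in the inference code above, here is a minimal sketch that applies the sampling parameters recommended in the card (temperature 0.6, top_p 0.95, top_k 40); the prompt and the smaller max_new_tokens are illustrative assumptions.

```python
# Minimal sketch: sampled generation with the card's recommended parameters
# (temperature 0.6, top_p 0.95, top_k 40). Model path follows the card; the
# short prompt and max_new_tokens=1024 are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "THUDM/GLM-4-Z1-9B-0414"
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto")

messages = [{"role": "user", "content": "Briefly explain why sampling temperature matters."}]
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True, return_dict=True
).to(model.device)

out = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.6,   # recommended: balances creativity and stability
    top_p=0.95,        # recommended cumulative-probability threshold
    top_k=40,          # recommended rare-token filter
    max_new_tokens=1024,
)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```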
Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Q4_K_M-GGUF
Triangle104
2025-05-03T16:56:28Z
0
0
transformers
[ "transformers", "gguf", "nvidia", "llama-3", "pytorch", "abliterated", "uncensored", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated", "base_model:quantized:huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated", "license:other", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T16:56:01Z
--- base_model: huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated language: - en library_name: transformers license: other license_name: nvidia-open-model-license license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/ pipeline_tag: text-generation tags: - nvidia - llama-3 - pytorch - abliterated - uncensored - llama-cpp - gguf-my-repo --- # Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Q4_K_M-GGUF This model was converted to GGUF format from [`huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated`](https://huggingface.co/huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Q4_K_M-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Q4_K_M-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-abliterated-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Q4_K_M-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Q4_K_M-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-abliterated-q4_k_m.gguf -c 2048 ```
adi0308/output
adi0308
2025-05-03T16:52:00Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:meta-llama/Llama-3.2-1B", "base_model:adapter:meta-llama/Llama-3.2-1B", "license:llama3.2", "region:us" ]
null
2025-05-03T16:51:09Z
--- library_name: peft license: llama3.2 base_model: meta-llama/Llama-3.2-1B tags: - generated_from_trainer model-index: - name: output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.7.0+cu118 - Datasets 3.5.1 - Tokenizers 0.21.1
acekillersg/Qwen2.5_Insurance_7B_fp16
acekillersg
2025-05-03T16:51:10Z
0
0
null
[ "safetensors", "qwen2", "finance", "insurance", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-04-19T13:37:10Z
--- license: apache-2.0 base_model: - Qwen/Qwen2.5-7B-Instruct tags: - finance - insurance ---
En3rGy/getphatFLUXReality_v5Hardcore
En3rGy
2025-05-03T16:44:41Z
0
0
null
[ "base_model:black-forest-labs/FLUX.1-dev", "base_model:finetune:black-forest-labs/FLUX.1-dev", "region:us" ]
null
2025-05-03T03:18:02Z
--- base_model: - black-forest-labs/FLUX.1-dev --- # getphat FLUX Reality > *“My most versatile model to date.”* --- ### Description **getphat FLUX Reality** is a versatile SDXL model that delivers impressive results in both realistic and stylized (anime) domains. It works extremely well with LoRAs, has a good grasp of lighting, anatomy, and texture, and is suitable for NSFW as well as artistic-abstract content. --- ### Recommended Parameters - **Sampling:** DPM++ 2M Karras - **Steps:** 20–30 - **CFG Scale:** 7–8 - **Resolution:** 1024x1024 or higher recommended --- ### Example Prompts ```txt photo of a realistic woman in a leather jacket, ultra detailed, cinematic lighting, sdxl anime girl with glowing eyes, cyberpunk city, dynamic angle, sdxl style A feline silhouette glides across a sunlit floor, each step deliberate and light, until a sudden leap carries her gracefully onto a softly rumpled bed, where shadows fold gently beneath her landing and dust particles dance in the disturbed light. ``` --- ### Prompt Inspiration from the Creator A beautiful AI generated image of a woman among cascading synesthetic gradients echoing through a feedback loop of ambient intention, where the luminal drift of pseudo-paradox waves collide with the soft geometry of inverted timelines, suspended in a post-emotive flux of chromatic resonance. The essence of spatial murmurs weaves through probabilistic shimmerfields, balanced delicately on the cusp of translucent recursion, as if the entire composition breathes in algorithmic reverie — not quite dreaming, not quite rendering, but oscillating between semi-coherent wonderbytes and metaphysical lens flare. A legendary prompt, quoted verbatim from the model video on Civitai. --- ### License / Origin Original model on Civitai: https://civitai.com/models/861840/getphat-flux-reality-nsfw --- ### Note This model is NSFW-capable, but not limited to that. The realistic image quality is impressive, perfect for creative experiments and abstract escalations.
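As a rough illustration of the recommended parameters above, here is a minimal sketch using diffusers, assuming the checkpoint ships as a single SDXL-style `.safetensors` file as the description suggests; the local filename, prompt choice, and exact step/CFG values are assumptions.

```python
# Minimal sketch: text-to-image with the recommended settings from the card
# (DPM++ 2M Karras, 20-30 steps, CFG 7-8, 1024x1024). Assumes an SDXL-style
# single-file checkpoint; the local filename below is hypothetical.
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionXLPipeline.from_single_file(
    "getphatFLUXReality_v5Hardcore.safetensors", torch_dtype=torch.float16
).to("cuda")
# "DPM++ 2M Karras" corresponds to DPMSolverMultistepScheduler with Karras sigmas.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe(
    "photo of a realistic woman in a leather jacket, ultra detailed, cinematic lighting, sdxl",
    num_inference_steps=25,  # recommended range: 20-30
    guidance_scale=7.5,      # recommended CFG: 7-8
    width=1024,
    height=1024,
).images[0]
image.save("getphat_example.png")
```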
LarryAIDraw/touhou_kudamaki_ponyXL
LarryAIDraw
2025-05-03T16:41:08Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-05-03T16:09:16Z
--- license: creativeml-openrail-m --- https://civitai.com/models/371928/ponyv6-xl-tsukasa-kudamaki-or-touhou
pictgencustomer/coneyisland2-14_425
pictgencustomer
2025-05-03T16:38:24Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-03T16:38:21Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: coneyisland2-14_michaeluffer_6 --- # Coneyisland2 14_425 <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `coneyisland2-14_michaeluffer_6` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('pictgencustomer/coneyisland2-14_425', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
Membersuger/Euro_21
Membersuger
2025-05-03T16:30:08Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T16:20:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jnjj/Vvv
jnjj
2025-05-03T16:26:47Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:jnjj/model_no_bias_qwen3-0.6B", "base_model:quantized:jnjj/model_no_bias_qwen3-0.6B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-03T16:26:42Z
--- base_model: jnjj/model_no_bias_qwen3-0.6B library_name: transformers tags: - llama-cpp - gguf-my-repo --- # jnjj/model_no_bias_qwen3-0.6B-Q3_K_L-GGUF This model was converted to GGUF format from [`jnjj/model_no_bias_qwen3-0.6B`](https://huggingface.co/jnjj/model_no_bias_qwen3-0.6B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/jnjj/model_no_bias_qwen3-0.6B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo jnjj/model_no_bias_qwen3-0.6B-Q3_K_L-GGUF --hf-file model_no_bias_qwen3-0.6b-q3_k_l.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo jnjj/model_no_bias_qwen3-0.6B-Q3_K_L-GGUF --hf-file model_no_bias_qwen3-0.6b-q3_k_l.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo jnjj/model_no_bias_qwen3-0.6B-Q3_K_L-GGUF --hf-file model_no_bias_qwen3-0.6b-q3_k_l.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo jnjj/model_no_bias_qwen3-0.6B-Q3_K_L-GGUF --hf-file model_no_bias_qwen3-0.6b-q3_k_l.gguf -c 2048 ```
yuvrajAI/Shadow2-Ai
yuvrajAI
2025-05-03T16:19:52Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-03T16:19:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Eehan/pythia-1b-deduped-tldr-gpm-2dim-temp-6025-beta-0.04
Eehan
2025-05-03T16:19:52Z
0
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T16:17:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dgambettaphd/M_llm2_gen7_WXS_doc1000_synt64_lr1e-04_acm_SYNLAST
dgambettaphd
2025-05-03T16:15:02Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-03T16:14:49Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mutaibamohsin845/cv_bot
mutaibamohsin845
2025-05-03T16:13:08Z
0
0
null
[ "base_model:deepseek-ai/DeepSeek-V3-0324", "base_model:finetune:deepseek-ai/DeepSeek-V3-0324", "license:mit", "region:us" ]
null
2025-05-03T16:11:55Z
--- license: mit base_model: - deepseek-ai/DeepSeek-V3-0324 ---
ASethi04/meta-llama-Llama-3.1-8B-pubmedqa-first-lora-4-0.0001
ASethi04
2025-05-03T16:13:01Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "endpoints_compatible", "region:us" ]
null
2025-05-03T14:29:38Z
--- base_model: meta-llama/Llama-3.1-8B library_name: transformers model_name: meta-llama-Llama-3.1-8B-pubmedqa-first-lora-4-0.0001 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for meta-llama-Llama-3.1-8B-pubmedqa-first-lora-4-0.0001 This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-pubmedqa-first-lora-4-0.0001", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/tzwksac9) This model was trained with SFT. ### Framework versions - TRL: 0.16.1 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
macpaw-research/tst_16bit-mlx-q
macpaw-research
2025-05-03T16:09:46Z
0
0
mlx
[ "mlx", "safetensors", "llama", "text-generation", "conversational", "base_model:macpaw-research/tst_16bit", "base_model:quantized:macpaw-research/tst_16bit", "license:apache-2.0", "4-bit", "region:us" ]
text-generation
2025-05-03T16:08:45Z
--- license: apache-2.0 pipeline_tag: text-generation base_model: macpaw-research/tst_16bit tags: - mlx library_name: mlx --- # macpaw-research/tst_16bit-mlx-q This model [macpaw-research/tst_16bit-mlx-q](https://huggingface.co/macpaw-research/tst_16bit-mlx-q) was converted to MLX format from [macpaw-research/tst_16bit](https://huggingface.co/macpaw-research/tst_16bit) using mlx-lm version **0.23.2**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("macpaw-research/tst_16bit-mlx-q") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
macpaw-research/tst_16bit-mlx
macpaw-research
2025-05-03T16:07:06Z
0
0
mlx
[ "mlx", "safetensors", "llama", "text-generation", "conversational", "base_model:macpaw-research/tst_16bit", "base_model:finetune:macpaw-research/tst_16bit", "license:apache-2.0", "region:us" ]
text-generation
2025-05-03T16:03:41Z
--- license: apache-2.0 base_model: macpaw-research/tst_16bit tags: - mlx library_name: mlx pipeline_tag: text-generation --- # macpaw-research/tst_16bit-mlx This model [macpaw-research/tst_16bit-mlx](https://huggingface.co/macpaw-research/tst_16bit-mlx) was converted to MLX format from [macpaw-research/tst_16bit](https://huggingface.co/macpaw-research/tst_16bit) using mlx-lm version **0.23.2**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("macpaw-research/tst_16bit-mlx") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
jdchang/full-with-label-bs-1024-sg-2-step-972
jdchang
2025-05-03T16:02:59Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2025-05-03T16:02:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Romain-XV/8f3edba3-d660-4f05-bc84-9befdb0b2deb
Romain-XV
2025-05-03T15:59:04Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "mistral", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "unsloth", "conversational", "arxiv:2305.18290", "base_model:unsloth/Phi-3-mini-4k-instruct", "base_model:finetune:unsloth/Phi-3-mini-4k-instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T15:09:42Z
--- base_model: unsloth/Phi-3-mini-4k-instruct library_name: transformers model_name: 8f3edba3-d660-4f05-bc84-9befdb0b2deb tags: - generated_from_trainer - axolotl - dpo - trl - unsloth licence: license --- # Model Card for 8f3edba3-d660-4f05-bc84-9befdb0b2deb This model is a fine-tuned version of [unsloth/Phi-3-mini-4k-instruct](https://huggingface.co/unsloth/Phi-3-mini-4k-instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Romain-XV/8f3edba3-d660-4f05-bc84-9befdb0b2deb", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/romain_fnc-xventures/Gradients-On-Demand/runs/8bqxlpyz) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
maksf8486/35d165a7-7403-460a-8417-8042999ad7d6
maksf8486
2025-05-03T15:58:32Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "mistral", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "unsloth", "conversational", "arxiv:2305.18290", "base_model:unsloth/Phi-3-mini-4k-instruct", "base_model:quantized:unsloth/Phi-3-mini-4k-instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-03T15:13:48Z
--- base_model: unsloth/Phi-3-mini-4k-instruct library_name: transformers model_name: 35d165a7-7403-460a-8417-8042999ad7d6 tags: - generated_from_trainer - axolotl - dpo - trl - unsloth licence: license --- # Model Card for 35d165a7-7403-460a-8417-8042999ad7d6 This model is a fine-tuned version of [unsloth/Phi-3-mini-4k-instruct](https://huggingface.co/unsloth/Phi-3-mini-4k-instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="maksf8486/35d165a7-7403-460a-8417-8042999ad7d6", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-2/runs/d0og4pbv) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
raregemmia/mia
raregemmia
2025-05-03T15:57:23Z
0
0
null
[ "license:bigcode-openrail-m", "region:us" ]
null
2025-05-03T15:57:23Z
--- license: bigcode-openrail-m ---
TOMFORD79/Fly43
TOMFORD79
2025-05-03T15:56:22Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-05-03T15:15:28Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
piotr-ai/polanka_4b_v0.1_qwen3_250502_gguf
piotr-ai
2025-05-03T15:52:25Z
0
0
null
[ "gguf", "language", "text-generation", "pl", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-05-02T07:35:13Z
--- license: apache-2.0 language: - pl - en pipeline_tag: text-generation tags: - language --- temp: 0.6 top-k: 40 **example 1:** ``` <|im_start|>system Poniżej znajduje się dialog, w którym użytkownik rozmawia z asystentką o imieniu Polanka. Polanka jest pomocna i uprzejma.<|im_end|> > co to jest psychologia? Psychologia to dziedzina nauki, która zajmuje się badaniem ludzkiego umysłu, zachowań, emocji, motywacji i procesów poznawczych. Psychologia bada, jak ludzie myślą, uczą się, doświadczają i reagują na swoje otoczenie. Jest to dyscyplina, która łączy w sobie elementy z różnych dziedzin, takich jak: 1. **Biologia**: Badanie biologicznych podstaw ludzkiego zachowania i poznawania. 2. **Neurobiologia**: Badanie struktury i funkcji mózgu oraz ich wpływu na zachowanie. 3. **Psychologia behawioralna**: Badanie zachowań poprzez obserwację i interwencję. 4. **Psychologia poznawcza**: Badanie procesów myślowych, takich jak pamięć, uczenie się, rozwiązywanie problemów. 5. **Psychologia społeczna**: Badanie wpływu społeczeństwa na zachowanie i postrzeganie siebie. 6. **Psychologia rozwoju**: Badanie rozwoju osobowości, poznawczego i emocjonalnego w czasie. 7. **Psychologia kliniczna**: Badanie i leczenie zaburzeń psychicznych oraz problemów emocjonalnych. 8. **Psychologia kulturowa**: Badanie wpływu kultury na ludzkie zachowanie i postrzeganie siebie. 9. **Psychologia etyczna**: Badanie etycznych aspektów psychologii i jej zastosowań. Psychologia ma kluczowe znaczenie w rozumieniu ludzkich zachowań, motywacji i emocji, a także w rozwoju terapii i leczenia zaburzeń psychicznych. ``` **example 2:** wyjaśnij krok po kroku ile to jest 12 / 3 + 2 ``` Oczywiście, wyjaśnię ci to krok po kroku: 1. Rozpocznijmy od pierwszego działania: 12 / 3. - 12 podzielone przez 3 to 4, ponieważ 3 razy 4 to 12. 2. Teraz dodajemy do wyniku 4 liczbę 2. - 4 + 2 = 6. Więc 12 / 3 + 2 = 6. ``` **example 3:** napisz funkcję w pythonie, która mnoży liczbę razy 3 ```` Oto funkcja w Pythonie, która mnoży liczbę przez 3: ```python def mnozenie_liczby_3(liczba): """ Funkcja mnoży liczbę przez 3. Parameters: liczba (int): Liczba, którą chcemy pomnożyć. Returns: int: Wynik mnożenia liczby przez 3. """ wynik = liczba * 3 return wynik # Przykłady użycia: liczba = 5 wynik = mnozenie_liczby_3(liczba) print(f"Wynik mnożenia 5 przez 3 to: {wynik}") liczba = 0 wynik = mnozenie_liczby_3(liczba) print(f"Wynik mnożenia 0 przez 3 to: {wynik}") ``` W tej funkcji `liczba` to parametr, który otrzymujemy podczas wywołania funkcji. Funkcja zwraca wynik mnożenia liczby przez 3. W przykładach użycia możemy przypisać do zmiennej liczbę, którą chcemy pomnożyć, a następnie wywołać funkcję z tą liczbą jako argumentem i wyświetlić wynik. ```` **example 4:** przetłumacz na język angielski: "Liczba mórz na Oceanie Spokojnym w stosunku do jego powierzchni nie jest zbyt wielka, ponieważ większa część jego obszaru to po prostu ocean, bez miejsc, w których można wyznaczyć morza." ``` "Number of seas in the Pacific Ocean relative to its surface isn't too great because the majority of its area is just ocean, with no places where you can identify a sea." ```
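The sampling settings recommended above (temp 0.6, top-k 40) have to be applied when running one of these GGUF files locally. Below is a minimal, hedged sketch using the third-party `llama-cpp-python` package; the GGUF filename is a placeholder and the exact file name in this repository may differ.

```python
# Hedged sketch: run a GGUF file from this repo with the suggested sampling settings.
# Assumes the third-party llama-cpp-python package; the filename below is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="polanka_4b_v0.1_qwen3.gguf", n_ctx=4096)  # placeholder filename

messages = [
    {"role": "system", "content": "Poniżej znajduje się dialog, w którym użytkownik rozmawia z asystentką o imieniu Polanka."},
    {"role": "user", "content": "co to jest psychologia?"},
]

out = llm.create_chat_completion(
    messages=messages,
    temperature=0.6,  # temp: 0.6, as recommended above
    top_k=40,         # top-k: 40, as recommended above
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```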
memeviss/zombieX_6
memeviss
2025-05-03T15:43:39Z
0
0
null
[ "safetensors", "region:us" ]
null
2025-05-03T15:40:27Z
# Optimized TTS Model This model has been optimized for 100% TOP1 performance using advanced parameter enhancement techniques. ## Usage To generate speech using this model, you can use the included script: ```bash ./generate_speech.py --text "Your text here" --output_path output.wav ``` For more details, see the optimization report in this directory.
Docty/dreambooth-dog-lora
Docty
2025-05-03T15:42:38Z
0
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "lora", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:stable-diffusion-v1-5/stable-diffusion-v1-5", "base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2025-05-03T15:17:56Z
--- base_model: stable-diffusion-v1-5/stable-diffusion-v1-5 library_name: diffusers license: creativeml-openrail-m inference: true instance_prompt: A photo of sks dog tags: - text-to-image - diffusers - lora - diffusers-training - stable-diffusion - stable-diffusion-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # LoRA DreamBooth - Docty/dreambooth-dog-lora These are LoRA adaptation weights for stable-diffusion-v1-5/stable-diffusion-v1-5. The weights were trained on "A photo of sks dog" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline (see the hedged sketch after this card) ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
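Since the card above leaves the usage snippet as a TODO, here is a minimal, hedged sketch of how such DreamBooth LoRA weights are typically loaded with diffusers; it assumes the standard `load_lora_weights` API and a CUDA device, and is not the author's verified code.

```python
# Hedged sketch: load the LoRA adapter on top of the Stable Diffusion 1.5 base model.
# Assumes diffusers' load_lora_weights API; adjust dtype/device to your hardware.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Attach the DreamBooth LoRA weights from this repository.
pipe.load_lora_weights("Docty/dreambooth-dog-lora")

# Prompt built around the instance prompt from the card's front matter.
image = pipe("A photo of sks dog in a bucket", num_inference_steps=30).images[0]
image.save("sks_dog.png")
```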
phospho-app/lego-pickup-sk6kb5mhh6
phospho-app
2025-05-03T15:40:44Z
0
0
null
[ "safetensors", "phosphobot", "act", "region:us" ]
null
2025-05-03T14:32:32Z
--- tags: - phosphobot - act task_categories: - robotics --- # act Model - phospho Training Pipeline ## This model was trained using **phospho**. Training was successful; try it out on your robot! ## Training parameters: - **Dataset**: [LegrandFrederic/lego-pickup-mono-setup](https://huggingface.co/datasets/LegrandFrederic/lego-pickup-mono-setup) - **Wandb run URL**: None - **Epochs**: None - **Batch size**: 100 - **Training steps**: 8000 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=replicate_groot_training_pipeline) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=replicate_groot_training_pipeline)
Bonnief/finetune-afriberta-small-am
Bonnief
2025-05-03T15:38:15Z
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "fill-mask", "generated_from_trainer", "base_model:castorini/afriberta_small", "base_model:finetune:castorini/afriberta_small", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2025-05-03T13:06:00Z
--- library_name: transformers base_model: castorini/afriberta_small tags: - generated_from_trainer model-index: - name: finetune-afriberta-small-am results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetune-afriberta-small-am This model is a fine-tuned version of [castorini/afriberta_small](https://huggingface.co/castorini/afriberta_small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.5525 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 999 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - training_steps: 50000 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
lmganon123/THUDM_GLM-Z1-32B-0414-exl3-4.0bpw
lmganon123
2025-05-03T15:35:15Z
0
0
null
[ "safetensors", "glm4", "base_model:THUDM/GLM-Z1-32B-0414", "base_model:quantized:THUDM/GLM-Z1-32B-0414", "4-bit", "exl3", "region:us" ]
null
2025-05-03T14:19:10Z
--- base_model: - THUDM/GLM-Z1-32B-0414 --- I wish more people would make exl3 quants. I probably will be making some for 24GB VRAM. ## Prompt format ``` [gMASK]<sop><|system|> {system_prompt}<|user|> {prompt}<|assistant|> <think> ```
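The prompt format above is a plain text template; when a frontend does not apply the chat template for you, it can be filled in by hand. The following is a small, hedged sketch in Python — the system and user strings are placeholders, not values from this repository.

```python
# Hedged sketch: build the GLM-Z1 prompt string shown above by hand.
# system_prompt and prompt are placeholder values.
system_prompt = "You are a helpful assistant."
prompt = "Explain what an exl3 quant is."

full_prompt = (
    "[gMASK]<sop><|system|>\n"
    f"{system_prompt}<|user|>\n"
    f"{prompt}<|assistant|>\n"
    "<think>"
)
print(full_prompt)
```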
RichardErkhov/myselfrew_-_llama31_n50_self_rewarding_bz31_lr2e6_2ep-gguf
RichardErkhov
2025-05-03T15:31:06Z
0
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-03T12:54:36Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama31_n50_self_rewarding_bz31_lr2e6_2ep - GGUF - Model creator: https://huggingface.co/myselfrew/ - Original model: https://huggingface.co/myselfrew/llama31_n50_self_rewarding_bz31_lr2e6_2ep/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q2_K.gguf](https://huggingface.co/RichardErkhov/myselfrew_-_llama31_n50_self_rewarding_bz31_lr2e6_2ep-gguf/blob/main/llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q2_K.gguf) | Q2_K | 2.96GB | | [llama31_n50_self_rewarding_bz31_lr2e6_2ep.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/myselfrew_-_llama31_n50_self_rewarding_bz31_lr2e6_2ep-gguf/blob/main/llama31_n50_self_rewarding_bz31_lr2e6_2ep.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [llama31_n50_self_rewarding_bz31_lr2e6_2ep.IQ3_S.gguf](https://huggingface.co/RichardErkhov/myselfrew_-_llama31_n50_self_rewarding_bz31_lr2e6_2ep-gguf/blob/main/llama31_n50_self_rewarding_bz31_lr2e6_2ep.IQ3_S.gguf) | IQ3_S | 3.43GB | | [llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/myselfrew_-_llama31_n50_self_rewarding_bz31_lr2e6_2ep-gguf/blob/main/llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [llama31_n50_self_rewarding_bz31_lr2e6_2ep.IQ3_M.gguf](https://huggingface.co/RichardErkhov/myselfrew_-_llama31_n50_self_rewarding_bz31_lr2e6_2ep-gguf/blob/main/llama31_n50_self_rewarding_bz31_lr2e6_2ep.IQ3_M.gguf) | IQ3_M | 3.52GB | | [llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q3_K.gguf](https://huggingface.co/RichardErkhov/myselfrew_-_llama31_n50_self_rewarding_bz31_lr2e6_2ep-gguf/blob/main/llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q3_K.gguf) | Q3_K | 3.74GB | | [llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/myselfrew_-_llama31_n50_self_rewarding_bz31_lr2e6_2ep-gguf/blob/main/llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/myselfrew_-_llama31_n50_self_rewarding_bz31_lr2e6_2ep-gguf/blob/main/llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [llama31_n50_self_rewarding_bz31_lr2e6_2ep.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/myselfrew_-_llama31_n50_self_rewarding_bz31_lr2e6_2ep-gguf/blob/main/llama31_n50_self_rewarding_bz31_lr2e6_2ep.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q4_0.gguf](https://huggingface.co/RichardErkhov/myselfrew_-_llama31_n50_self_rewarding_bz31_lr2e6_2ep-gguf/blob/main/llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q4_0.gguf) | Q4_0 | 4.34GB | | [llama31_n50_self_rewarding_bz31_lr2e6_2ep.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/myselfrew_-_llama31_n50_self_rewarding_bz31_lr2e6_2ep-gguf/blob/main/llama31_n50_self_rewarding_bz31_lr2e6_2ep.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/myselfrew_-_llama31_n50_self_rewarding_bz31_lr2e6_2ep-gguf/blob/main/llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q4_K.gguf](https://huggingface.co/RichardErkhov/myselfrew_-_llama31_n50_self_rewarding_bz31_lr2e6_2ep-gguf/blob/main/llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q4_K.gguf) | Q4_K | 4.58GB | | 
[llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/myselfrew_-_llama31_n50_self_rewarding_bz31_lr2e6_2ep-gguf/blob/main/llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q4_1.gguf](https://huggingface.co/RichardErkhov/myselfrew_-_llama31_n50_self_rewarding_bz31_lr2e6_2ep-gguf/blob/main/llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q4_1.gguf) | Q4_1 | 4.78GB | | [llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q5_0.gguf](https://huggingface.co/RichardErkhov/myselfrew_-_llama31_n50_self_rewarding_bz31_lr2e6_2ep-gguf/blob/main/llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q5_0.gguf) | Q5_0 | 5.21GB | | [llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/myselfrew_-_llama31_n50_self_rewarding_bz31_lr2e6_2ep-gguf/blob/main/llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q5_K.gguf](https://huggingface.co/RichardErkhov/myselfrew_-_llama31_n50_self_rewarding_bz31_lr2e6_2ep-gguf/blob/main/llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q5_K.gguf) | Q5_K | 5.34GB | | [llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/myselfrew_-_llama31_n50_self_rewarding_bz31_lr2e6_2ep-gguf/blob/main/llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q5_1.gguf](https://huggingface.co/RichardErkhov/myselfrew_-_llama31_n50_self_rewarding_bz31_lr2e6_2ep-gguf/blob/main/llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q5_1.gguf) | Q5_1 | 5.65GB | | [llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q6_K.gguf](https://huggingface.co/RichardErkhov/myselfrew_-_llama31_n50_self_rewarding_bz31_lr2e6_2ep-gguf/blob/main/llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q6_K.gguf) | Q6_K | 6.14GB | | [llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q8_0.gguf](https://huggingface.co/RichardErkhov/myselfrew_-_llama31_n50_self_rewarding_bz31_lr2e6_2ep-gguf/blob/main/llama31_n50_self_rewarding_bz31_lr2e6_2ep.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. 
--> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. 
--> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sweet-sour-pig/thro
sweet-sour-pig
2025-05-03T15:30:32Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-classification", "generated_from_trainer", "base_model:meta-llama/Llama-3.2-1B", "base_model:finetune:meta-llama/Llama-3.2-1B", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2025-05-03T10:22:51Z
--- library_name: transformers license: llama3.2 base_model: meta-llama/Llama-3.2-1B tags: - generated_from_trainer metrics: - accuracy - f1 - recall model-index: - name: thro results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # thro This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 - Accuracy: 1.0 - F1: 1.0 - Recall: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|:------:| | 0.0 | 1.0 | 125 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0 | 2.0 | 250 | 0.0000 | 1.0 | 1.0 | 1.0 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.3.1 - Tokenizers 0.21.0
YkyR/whisper-small-dv
YkyR
2025-05-03T15:30:30Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "tt", "dataset:mozilla-foundation/common_voice_13_0", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-05-03T13:16:54Z
--- library_name: transformers language: - tt license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_13_0 metrics: - wer model-index: - name: Whisper Small tt - Uku Renek Kronbergs results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 13 type: mozilla-foundation/common_voice_13_0 config: tt split: test args: tt metrics: - name: Wer type: wer value: 38.6458078709362 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small tt - Uku Renek Kronbergs This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset. It achieves the following results on the evaluation set: - Loss: 0.3486 - Wer Ortho: 45.5504 - Wer: 38.6458 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:------:|:----:|:---------------:|:---------:|:-------:| | 0.2153 | 0.6219 | 500 | 0.3486 | 45.5504 | 38.6458 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
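The card above does not include an inference snippet; a minimal, hedged sketch using the transformers automatic-speech-recognition pipeline follows. The audio path is a placeholder, and chunking or language options may need adjusting for your data.

```python
# Hedged sketch: transcribe an audio file with the fine-tuned Whisper checkpoint.
# "sample.wav" is a placeholder path; 16 kHz mono audio works best with Whisper.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="YkyR/whisper-small-dv")
result = asr("sample.wav")
print(result["text"])
```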
OOOss/ru-ner-model
OOOss
2025-05-03T15:29:51Z
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-05-03T15:27:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
aisyhmaira/llama-3.2-ko-finetune-2
aisyhmaira
2025-05-03T15:25:32Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/Llama-3.2-1B-unsloth-bnb-4bit", "base_model:finetune:unsloth/Llama-3.2-1B-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-03T15:20:34Z
--- base_model: unsloth/Llama-3.2-1B-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** aisyhmaira - **License:** apache-2.0 - **Finetuned from model :** unsloth/Llama-3.2-1B-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
phospho-app/anthonyav-so100-lego-v3-caopgh0iqe
phospho-app
2025-05-03T15:22:21Z
0
0
null
[ "safetensors", "gr00t_n1", "phosphobot", "gr00t", "region:us" ]
null
2025-05-03T14:46:37Z
--- tags: - phosphobot - gr00t task_categories: - robotics --- # gr00t Model - phospho Training Pipeline ## This model was trained using **phospho**. Training was successful; try it out on your robot! ## Training parameters: - **Dataset**: [anthonyav/so100-lego-v3](https://huggingface.co/datasets/anthonyav/so100-lego-v3) - **Wandb run URL**: None - **Epochs**: 10 - **Batch size**: 64 - **Training steps**: None 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=replicate_groot_training_pipeline) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=replicate_groot_training_pipeline)
KriptoUzmani/Qwen2.5-32B-Instruct-bnb-4bit-Gensyn-Swarm-squeaky_trotting_komodo
KriptoUzmani
2025-05-03T15:14:33Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am squeaky trotting komodo", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-32B-Instruct-bnb-4bit", "base_model:finetune:Gensyn/Qwen2.5-32B-Instruct-bnb-4bit", "endpoints_compatible", "region:us" ]
null
2025-05-01T05:26:54Z
--- base_model: Gensyn/Qwen2.5-32B-Instruct-bnb-4bit library_name: transformers model_name: Qwen2.5-32B-Instruct-bnb-4bit-Gensyn-Swarm-squeaky_trotting_komodo tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am squeaky trotting komodo - unsloth - trl licence: license --- # Model Card for Qwen2.5-32B-Instruct-bnb-4bit-Gensyn-Swarm-squeaky_trotting_komodo This model is a fine-tuned version of [Gensyn/Qwen2.5-32B-Instruct-bnb-4bit](https://huggingface.co/Gensyn/Qwen2.5-32B-Instruct-bnb-4bit). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="KriptoUzmani/Qwen2.5-32B-Instruct-bnb-4bit-Gensyn-Swarm-squeaky_trotting_komodo", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
dzanbek/31f40443-6a03-4462-8652-7009c6c1403e
dzanbek
2025-05-03T15:08:42Z
0
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:unsloth/Mistral-Nemo-Base-2407", "base_model:adapter:unsloth/Mistral-Nemo-Base-2407", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-05-03T14:44:35Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Mistral-Nemo-Base-2407 tags: - axolotl - generated_from_trainer model-index: - name: 31f40443-6a03-4462-8652-7009c6c1403e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: unsloth/Mistral-Nemo-Base-2407 bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 7f381c5f243a3b63_train_data.json ds_type: json format: custom path: /workspace/input_data/7f381c5f243a3b63_train_data.json type: field_input: critic_prompt field_instruction: prompt field_output: init_response format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: dzanbek/31f40443-6a03-4462-8652-7009c6c1403e hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/7f381c5f243a3b63_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 1031246b-4c91-4dd7-b3f7-0b6440762bad wandb_project: s56-2 wandb_run: your_name wandb_runid: 1031246b-4c91-4dd7-b3f7-0b6440762bad warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 31f40443-6a03-4462-8652-7009c6c1403e This model is a fine-tuned version of [unsloth/Mistral-Nemo-Base-2407](https://huggingface.co/unsloth/Mistral-Nemo-Base-2407) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.6457 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.7042 | 0.0384 | 200 | 0.6457 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
eRogueca/distilbert-rotten-tomatoes
eRogueca
2025-05-03T15:07:29Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-03T15:05:57Z
--- library_name: transformers license: apache-2.0 base_model: distilbert/distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-rotten-tomatoes results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-rotten-tomatoes This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.8.0.dev20250503+cu128 - Datasets 3.5.1 - Tokenizers 0.21.1
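No usage example is given in the card above; a minimal, hedged sketch with the transformers text-classification pipeline is shown below. The label names depend on the (unspecified) fine-tuning data, so treat the returned labels as placeholders.

```python
# Hedged sketch: score a movie review with the fine-tuned DistilBERT classifier.
# The label ids/names come from the (unspecified) fine-tuning data.
from transformers import pipeline

classifier = pipeline("text-classification", model="eRogueca/distilbert-rotten-tomatoes")
print(classifier("A warm, funny and surprisingly moving film."))
```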
Shengkun/DarwinLM-4.6B-Llama3.1-8B-Pruned-Masked
Shengkun
2025-05-03T15:07:11Z
157
0
null
[ "safetensors", "llama", "license:apache-2.0", "region:us" ]
null
2025-04-13T06:26:56Z
--- license: apache-2.0 --- This is DarwinLM pruned from Llama3.1-8B. The model is masked: the pruned weights are set to 0, while the remaining weights are the same as in the original model. The shapes of all weights therefore match the original model. ```python # To use the model from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("Shengkun/DarwinLM-4.6B-Llama3.1-8B-Pruned-Masked") ``` **4.6B** | Model | Method | Param. | SciQ | PIQA | WG | ArcE | ArcC | HS | LogiQA | BoolQ | MMLU | Avg | |-----------------|------------------------|--------|------|------|------|------|------|------|--------|-------|------|------| | **Llama-3.1-8B** | **Dense** | 8B | 96.3 | 81.2 | 74.3 | 81.4 | 58.2 | 81.7 | 31.1 | 84.0 | 65.2 | 72.8 | | | **Uniform** | 4.5B | 29.1 | 53.6 | 51.7 | 26.0 | 23.6 | 27.1 | 25.5 | 62.1 | 25.7 | 36.1 | | | **ZipLM** | 6B | 65.5 | 60.6 | 56.0 | 40.2 | 34.4 | 34.4 | 28.1 | 63.0 | 27.9 | 45.7 | | | *DarwinLM (one-shot)* | 4.6B | 84.9 | 69.4 | 57.3 | 59.6 | 34.2 | 44.6 | 24.1 | 62.2 | 28.5 | 51.6 | | | **OLMO (2.5T)** | 7B | 92.8 | 79.4 | 70.4 | 73.3 | 44.9 | 77.1 | 27.9 | 72.5 | 28.3 | 62.9 | | | *DarwinLM (10.0B)* | 4.6B | 93.2 | 74.8 | 67.4 | 73.2 | 51.6 | 71.3 | 30.7 | 71.1 | 40.6 | 63.7 |
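Because the pruning is expressed as zeroed weights rather than changed tensor shapes, one way to inspect it is to measure sparsity after loading. The sketch below is a hedged illustration of that check, not part of the official release.

```python
# Hedged sketch: confirm the masked-pruning claim by measuring weight sparsity.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Shengkun/DarwinLM-4.6B-Llama3.1-8B-Pruned-Masked"
)

total, zeros = 0, 0
for name, param in model.named_parameters():
    total += param.numel()
    zeros += (param == 0).sum().item()

print(f"Overall sparsity: {zeros / total:.2%} of weights are exactly zero")
```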
lisabdunlap/Llama-3.1-8B-Instruct-unsloth-bnb-4bit-r32-e10-lr0.0002-mixed-actors_reviews_markdown-new
lisabdunlap
2025-05-03T14:50:39Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/Llama-3.1-8B-Instruct-unsloth-bnb-4bit", "base_model:finetune:unsloth/Llama-3.1-8B-Instruct-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T14:48:24Z
---
base_model: unsloth/Llama-3.1-8B-Instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** lisabdunlap
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.1-8B-Instruct-unsloth-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
erennaltin/qwen_with_unsloth
erennaltin
2025-05-03T14:49:40Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "base_model:unsloth/Qwen3-8B-unsloth-bnb-4bit", "base_model:finetune:unsloth/Qwen3-8B-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-03T14:49:18Z
---
base_model: unsloth/Qwen3-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** erennaltin
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-8B-unsloth-bnb-4bit

This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
spacematt/gemma-3-12b-it-qat-q4_0-unquantized-Q5_K_M-GGUF
spacematt
2025-05-03T14:48:43Z
46
0
transformers
[ "transformers", "gguf", "gemma3", "gemma", "google", "llama-cpp", "gguf-my-repo", "image-text-to-text", "base_model:google/gemma-3-12b-it-qat-q4_0-unquantized", "base_model:quantized:google/gemma-3-12b-it-qat-q4_0-unquantized", "license:gemma", "endpoints_compatible", "region:us", "conversational" ]
image-text-to-text
2025-04-21T15:22:43Z
---
base_model: google/gemma-3-12b-it-qat-q4_0-unquantized
library_name: transformers
license: gemma
pipeline_tag: image-text-to-text
tags:
- gemma3
- gemma
- google
- llama-cpp
- gguf-my-repo
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
  agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---

# spacematt/gemma-3-12b-it-qat-q4_0-unquantized-Q5_K_M-GGUF

This model was converted to GGUF format from [`google/gemma-3-12b-it-qat-q4_0-unquantized`](https://huggingface.co/google/gemma-3-12b-it-qat-q4_0-unquantized) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/gemma-3-12b-it-qat-q4_0-unquantized) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo spacematt/gemma-3-12b-it-qat-q4_0-unquantized-Q5_K_M-GGUF --hf-file gemma-3-12b-it-qat-q4_0-unquantized-q5_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo spacematt/gemma-3-12b-it-qat-q4_0-unquantized-Q5_K_M-GGUF --hf-file gemma-3-12b-it-qat-q4_0-unquantized-q5_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo spacematt/gemma-3-12b-it-qat-q4_0-unquantized-Q5_K_M-GGUF --hf-file gemma-3-12b-it-qat-q4_0-unquantized-q5_k_m.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo spacematt/gemma-3-12b-it-qat-q4_0-unquantized-Q5_K_M-GGUF --hf-file gemma-3-12b-it-qat-q4_0-unquantized-q5_k_m.gguf -c 2048
```
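If you would rather call the GGUF file from Python than from the llama.cpp CLI, a minimal sketch with the llama-cpp-python bindings is given below (my addition, not from the upstream card — it assumes `pip install llama-cpp-python` plus `huggingface_hub`, and that `Llama.from_pretrained` can fetch the file from the Hub):

```python
# Minimal sketch: download the Q5_K_M GGUF from the Hub and run a short
# completion with llama-cpp-python instead of the llama.cpp CLI.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="spacematt/gemma-3-12b-it-qat-q4_0-unquantized-Q5_K_M-GGUF",
    filename="gemma-3-12b-it-qat-q4_0-unquantized-q5_k_m.gguf",
    n_ctx=2048,  # same context size as the llama-server example above
)

out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```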
thastain/ADRv2025
thastain
2025-05-03T14:44:08Z
5
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-30T21:28:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dgambettaphd/M_llm2_gen6_WXS_doc1000_synt64_lr1e-04_acm_SYNLAST
dgambettaphd
2025-05-03T14:41:32Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-03T14:41:20Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mansoorhamidzadeh/qwen3-0.6b-entity-attr-basalam
mansoorhamidzadeh
2025-05-03T14:32:58Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-03T14:30:29Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ASethi04/meta-llama-Llama-3.1-8B-opc-sft-first-lora-4-0.0001
ASethi04
2025-05-03T14:32:00Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "endpoints_compatible", "region:us" ]
null
2025-05-03T13:23:26Z
---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
model_name: meta-llama-Llama-3.1-8B-opc-sft-first-lora-4-0.0001
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for meta-llama-Llama-3.1-8B-opc-sft-first-lora-4-0.0001

This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-opc-sft-first-lora-4-0.0001", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/0eo5c0tu)

This model was trained with SFT.

### Framework versions

- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
MrRobotoAI/A24
MrRobotoAI
2025-05-03T14:29:28Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2212.04089", "base_model:MrRobotoAI/A2", "base_model:merge:MrRobotoAI/A2", "base_model:MrRobotoAI/A6", "base_model:merge:MrRobotoAI/A6", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T14:26:32Z
---
base_model:
- MrRobotoAI/A6
- MrRobotoAI/A2
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method using [MrRobotoAI/A2](https://huggingface.co/MrRobotoAI/A2) as a base.

### Models Merged

The following models were included in the merge:
* [MrRobotoAI/A6](https://huggingface.co/MrRobotoAI/A6)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: task_arithmetic
models:
  - model: MrRobotoAI/A2
    parameters:
      weight:
        - filter: v_proj
          value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
        - filter: o_proj
          value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
        - filter: up_proj
          value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
        - filter: gate_proj
          value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
        - filter: down_proj
          value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
        - value: 1
  - model: MrRobotoAI/A6
    parameters:
      weight:
        - filter: v_proj
          value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
        - filter: o_proj
          value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
        - filter: up_proj
          value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
        - filter: gate_proj
          value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
        - filter: down_proj
          value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
        - value: 0
base_model: MrRobotoAI/A2
dtype: bfloat16
```
ASethi04/meta-llama-Llama-3.1-8B-pubmedqa-first-lora-4-4e-05
ASethi04
2025-05-03T14:29:15Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "endpoints_compatible", "region:us" ]
null
2025-05-03T12:45:43Z
---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
model_name: meta-llama-Llama-3.1-8B-pubmedqa-first-lora-4-4e-05
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for meta-llama-Llama-3.1-8B-pubmedqa-first-lora-4-4e-05

This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-pubmedqa-first-lora-4-4e-05", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/xh632c7w)

This model was trained with SFT.

### Framework versions

- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
wellepatts7t/sdfvdfv
wellepatts7t
2025-05-03T14:28:32Z
0
0
null
[ "license:bsd-3-clause-clear", "region:us" ]
null
2025-05-03T14:28:32Z
--- license: bsd-3-clause-clear ---
cnfusion/DeepSeek-Prover-V2-7B-mlx-8Bit
cnfusion
2025-05-03T14:19:41Z
0
0
mlx
[ "mlx", "safetensors", "llama", "base_model:deepseek-ai/DeepSeek-Prover-V2-7B", "base_model:quantized:deepseek-ai/DeepSeek-Prover-V2-7B", "8-bit", "region:us" ]
null
2025-05-03T14:19:14Z
---
base_model: deepseek-ai/DeepSeek-Prover-V2-7B
tags:
- mlx
---

# cnfusion/DeepSeek-Prover-V2-7B-mlx-8Bit

The Model [cnfusion/DeepSeek-Prover-V2-7B-mlx-8Bit](https://huggingface.co/cnfusion/DeepSeek-Prover-V2-7B-mlx-8Bit) was converted to MLX format from [deepseek-ai/DeepSeek-Prover-V2-7B](https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-7B) using mlx-lm version **0.22.3**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("cnfusion/DeepSeek-Prover-V2-7B-mlx-8Bit")

prompt = "hello"

if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
osanohfelizquc/gbsfdgtb
osanohfelizquc
2025-05-03T14:19:36Z
0
0
null
[ "license:bsd-2-clause", "region:us" ]
null
2025-05-03T14:19:36Z
--- license: bsd-2-clause ---
apriasmoro/fa265a4a-9da9-4622-ad17-47d795a1d0fe
apriasmoro
2025-05-03T14:19:10Z
0
0
peft
[ "peft", "safetensors", "phi3", "axolotl", "generated_from_trainer", "custom_code", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:adapter:microsoft/Phi-3-mini-4k-instruct", "license:mit", "region:us" ]
null
2025-05-03T14:15:41Z
--- library_name: peft license: mit base_model: microsoft/Phi-3-mini-4k-instruct tags: - axolotl - generated_from_trainer model-index: - name: fa265a4a-9da9-4622-ad17-47d795a1d0fe results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.5.2` ```yaml adapter: lora base_model: microsoft/Phi-3-mini-4k-instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - f8164dbb54597854_train_data.json ds_type: json format: custom path: /workspace/input_data/f8164dbb54597854_train_data.json type: field_input: description field_instruction: article field_output: reference field_system: None format: None no_input_format: None system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: apriasmoro/fa265a4a-9da9-4622-ad17-47d795a1d0fe hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 10 micro_batch_size: 2 mlflow_experiment_name: f8164dbb54597854_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 40b4e886-e6cd-4d53-9dbf-7bfd3907faf7 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 40b4e886-e6cd-4d53-9dbf-7bfd3907faf7 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # fa265a4a-9da9-4622-ad17-47d795a1d0fe This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 2.1108

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.1906        | 0.0004 | 1    | 3.0166          |
| 3.1423        | 0.0012 | 3    | 2.9607          |
| 2.8142        | 0.0024 | 6    | 2.5993          |
| 2.3248        | 0.0036 | 9    | 2.1108          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
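The card reports training settings but no inference snippet. A minimal loading sketch follows (my addition — it assumes the repo contains a standard PEFT/LoRA adapter for Phi-3-mini-4k-instruct, which is what the axolotl config above suggests):

```python
# Minimal sketch: attach the LoRA adapter produced by this run to its
# Phi-3-mini base model and generate a short completion.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_repo = "apriasmoro/fa265a4a-9da9-4622-ad17-47d795a1d0fe"

model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_repo,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # the Phi-3 base repo uses custom modeling code
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct", trust_remote_code=True)

inputs = tokenizer("Summarize the article in one sentence:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```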
mlfoundations-dev/d1_math_shortest_0.3k
mlfoundations-dev
2025-05-03T14:17:20Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T12:41:25Z
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: d1_math_shortest_0.3k
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# d1_math_shortest_0.3k

This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_math_shortest_0.3k dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 13.0

### Training results

### Framework versions

- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
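The card lists training settings but no usage example. Since this is a full fine-tune of Qwen2.5-7B-Instruct, a minimal chat-style sketch (my addition — it assumes the checkpoint keeps the Qwen chat template from the base model) would be:

```python
# Minimal sketch: chat with the fine-tuned checkpoint through the tokenizer's
# built-in chat template, exactly as with the Qwen2.5-7B-Instruct base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "mlfoundations-dev/d1_math_shortest_0.3k"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "What is the sum of the first 20 positive integers?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```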
MrRobotoAI/A20
MrRobotoAI
2025-05-03T14:17:13Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2212.04089", "base_model:MrRobotoAI/A1", "base_model:merge:MrRobotoAI/A1", "base_model:MrRobotoAI/A2", "base_model:merge:MrRobotoAI/A2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T14:14:16Z
---
base_model:
- MrRobotoAI/A1
- MrRobotoAI/A2
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method using [MrRobotoAI/A2](https://huggingface.co/MrRobotoAI/A2) as a base.

### Models Merged

The following models were included in the merge:
* [MrRobotoAI/A1](https://huggingface.co/MrRobotoAI/A1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: task_arithmetic
models:
  - model: MrRobotoAI/A2
    parameters:
      weight:
        - filter: v_proj
          value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
        - filter: o_proj
          value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
        - filter: up_proj
          value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
        - filter: gate_proj
          value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
        - filter: down_proj
          value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
        - value: 1
  - model: MrRobotoAI/A1
    parameters:
      weight:
        - filter: v_proj
          value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
        - filter: o_proj
          value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
        - filter: up_proj
          value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
        - filter: gate_proj
          value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
        - filter: down_proj
          value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
        - value: 0
base_model: MrRobotoAI/A2
dtype: bfloat16
```
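The YAML above is easier to read once you know what task arithmetic does numerically: each model contributes its delta from the base, scaled by the listed weight, and the per-filter lists simply apply different scales at different layer depths. A toy NumPy sketch of that rule for a single tensor (my addition, an illustration of the formula rather than mergekit's actual implementation):

```python
# Toy illustration of task-arithmetic merging for one weight tensor:
#   merged = base + sum_i  w_i * (model_i - base)
import numpy as np

rng = np.random.default_rng(0)
base = rng.normal(size=(4, 4))                        # stand-in for a base-model tensor
model_a = base + rng.normal(scale=0.1, size=(4, 4))   # fine-tune 1 (the base-weighted model above)
model_b = base + rng.normal(scale=0.1, size=(4, 4))   # fine-tune 2 (the merged-in model above)

w_a, w_b = 0.8, 0.2   # e.g. the outermost-layer v_proj weights in the config
merged = base + w_a * (model_a - base) + w_b * (model_b - base)

# Equivalent convex-combination view when the weights sum to 1:
print(np.allclose(merged, w_a * model_a + w_b * model_b + (1 - w_a - w_b) * base))  # True
```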
cnfusion/DeepSeek-Prover-V2-7B-mlx-fp16
cnfusion
2025-05-03T14:16:53Z
0
0
mlx
[ "mlx", "safetensors", "llama", "base_model:deepseek-ai/DeepSeek-Prover-V2-7B", "base_model:finetune:deepseek-ai/DeepSeek-Prover-V2-7B", "region:us" ]
null
2025-05-03T14:16:07Z
---
base_model: deepseek-ai/DeepSeek-Prover-V2-7B
tags:
- mlx
---

# cnfusion/DeepSeek-Prover-V2-7B-mlx-fp16

The Model [cnfusion/DeepSeek-Prover-V2-7B-mlx-fp16](https://huggingface.co/cnfusion/DeepSeek-Prover-V2-7B-mlx-fp16) was converted to MLX format from [deepseek-ai/DeepSeek-Prover-V2-7B](https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-7B) using mlx-lm version **0.22.3**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("cnfusion/DeepSeek-Prover-V2-7B-mlx-fp16")

prompt = "hello"

if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
Membersuger/Euro_17
Membersuger
2025-05-03T14:10:24Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T14:02:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kitswas/Gemma-3-4b-finetuned-bengali-wikidata
kitswas
2025-05-03T14:09:27Z
0
0
null
[ "safetensors", "region:us" ]
null
2025-05-03T14:07:46Z
---
language: bn
license: cc-by-nc-4.0
datasets:
- https://www.kaggle.com/datasets/saurabhshahane/bangla-wikipedia
---

# Gemma-3-4b Fine-tuned on Bengali Wikipedia
apriasmoro/82359c99-f769-46dd-842f-9b9c4b94ba6c
apriasmoro
2025-05-03T14:08:25Z
0
0
peft
[ "peft", "safetensors", "phi3", "axolotl", "generated_from_trainer", "custom_code", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:adapter:microsoft/Phi-3-mini-4k-instruct", "license:mit", "region:us" ]
null
2025-05-03T14:04:58Z
--- library_name: peft license: mit base_model: microsoft/Phi-3-mini-4k-instruct tags: - axolotl - generated_from_trainer model-index: - name: 82359c99-f769-46dd-842f-9b9c4b94ba6c results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.5.2` ```yaml adapter: lora base_model: microsoft/Phi-3-mini-4k-instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - f8164dbb54597854_train_data.json ds_type: json format: custom path: /workspace/input_data/f8164dbb54597854_train_data.json type: field_input: description field_instruction: article field_output: reference field_system: None format: None no_input_format: None system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: apriasmoro/82359c99-f769-46dd-842f-9b9c4b94ba6c hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 10 micro_batch_size: 2 mlflow_experiment_name: /tmp/f8164dbb54597854_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 40b4e886-e6cd-4d53-9dbf-7bfd3907faf7 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 40b4e886-e6cd-4d53-9dbf-7bfd3907faf7 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 82359c99-f769-46dd-842f-9b9c4b94ba6c This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 2.1244

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.1906        | 0.0004 | 1    | 3.0166          |
| 3.1591        | 0.0012 | 3    | 2.9764          |
| 2.8282        | 0.0024 | 6    | 2.6155          |
| 2.3305        | 0.0036 | 9    | 2.1244          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
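As a companion to the adapter-loading sketch shown for the sibling run above: if you prefer a standalone checkpoint rather than base-plus-adapter at inference time, the LoRA weights can be folded back into the base model. This is my addition, assuming the repo holds a standard PEFT adapter as produced by the axolotl run:

```python
# Minimal sketch: fold the LoRA adapter into the Phi-3-mini base weights
# and save a standalone checkpoint that no longer needs the peft library.
import torch
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained(
    "apriasmoro/82359c99-f769-46dd-842f-9b9c4b94ba6c",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
merged = model.merge_and_unload()   # bake the low-rank update into the dense weights
merged.save_pretrained("phi3-mini-82359c99-merged")
```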
MrRobotoAI/A16
MrRobotoAI
2025-05-03T14:05:05Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2212.04089", "base_model:MrRobotoAI/137", "base_model:merge:MrRobotoAI/137", "base_model:MrRobotoAI/A2", "base_model:merge:MrRobotoAI/A2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T14:02:04Z
---
base_model:
- MrRobotoAI/137
- MrRobotoAI/A2
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method using [MrRobotoAI/A2](https://huggingface.co/MrRobotoAI/A2) as a base.

### Models Merged

The following models were included in the merge:
* [MrRobotoAI/137](https://huggingface.co/MrRobotoAI/137)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: task_arithmetic
models:
  - model: MrRobotoAI/A2
    parameters:
      weight:
        - filter: v_proj
          value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
        - filter: o_proj
          value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
        - filter: up_proj
          value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
        - filter: gate_proj
          value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
        - filter: down_proj
          value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
        - value: 1
  - model: MrRobotoAI/137
    parameters:
      weight:
        - filter: v_proj
          value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
        - filter: o_proj
          value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
        - filter: up_proj
          value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
        - filter: gate_proj
          value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
        - filter: down_proj
          value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
        - value: 0
base_model: MrRobotoAI/A2
dtype: bfloat16
```
Hachipo/Meta-Llama-3-8B-EnTrans_1000_2
Hachipo
2025-05-03T14:04:00Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T14:00:13Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
cernigyerkelty7v/dfgbfgb
cernigyerkelty7v
2025-05-03T14:03:12Z
0
0
null
[ "license:bsd-3-clause", "region:us" ]
null
2025-05-03T14:03:12Z
--- license: bsd-3-clause ---
Hamzah-Asadullah/Failed-FPFT-0.6B
Hamzah-Asadullah
2025-05-03T14:01:00Z
0
1
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "text2text-generation", "en", "base_model:Qwen/Qwen3-0.6B", "base_model:finetune:Qwen/Qwen3-0.6B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-05-03T09:47:16Z
---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-0.6B
pipeline_tag: text2text-generation
library_name: transformers
tags:
- conversational
---

> [!NOTE]
> Consider supporting me [here](https://ko-fi.com/hamzahasadullah) 🎉
> Try out my assistant for free [here](https://xetute.github.io/)

**Failed Full Parameter Fine Tune**

A fine-tuned version of the base model linked above. Its performance is better than the base model's, but not by as much as expected, which is why this fine-tune is labelled "failed". The model can be used for enhanced instruction following, STEM-based questions, storywriting and roleplay, and so on.

You can find the data used to train this model under `data.json`. For finetuning, the Qwen3 chat template was used.

**Happy chatting.**

<div style="display: flex; flex-direction: column; justify-content: center; align-items: left; font-size: 1rem; padding: 20px;">
  <div style="display: flex; flex-direction: row; align-items: center; margin: 10px; margin-left: 0; padding: 0;">
    <img src="https://xetute.github.io/favicon.ico" style="margin: 0; border-radius: 50%; height: 2rem;"/>
    <h2 style="margin: 0; margin-left: 10px;">XeTute Technologies</h2>
  </div>
  <div style="display: flex; flex-direction: row; gap: 5px; margin: 0; max-width: 500px;">
    XeTute Technologies is an unofficial Pakistani organisation created by <a href="https://huggingface.co/Hamzah-Asadullah">Hamzah Asadullah.</a>
  </div>
  <h2 style="margin: 5px; margin-top: 20px; margin-left: 0;">Links</h2>
  <div style="display: flex; flex-direction: row; word-break: none; gap: 5px;">
    <a href="https://huggingface.co/XeTute">HuggingFace</a>
    <a href="https://github.com/XeTute">GitHub</a>
  </div>
  <div style="display: flex; flex-direction: row; word-break: none; gap: 5px;">
    <a href="https://ko-fi.com/hamzahasadullah">Buy me a Coffee</a>
    <a href="https://xetute.github.io">Apex Webpage</a>
  </div>
  <h2 style="margin: 5px; margin-top: 20px; margin-left: 0;">Pakistan</h2>
  Pakistan is a country in South Asia known for its rich culture despite the British, its stunning landscape, and the PAF (Pakistan Armed Forces), its military. Long live the Islamic Republic of Pakistan.<br>
  <img src="https://upload.wikimedia.org/wikipedia/commons/3/32/Flag_of_Pakistan.svg" style="width: 85%; max-width: 512px; border-radius: 25px;"/>
</div>
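The card names the Qwen3 chat template but gives no inference snippet. A minimal chat sketch follows (my addition — it assumes the fine-tune kept the base Qwen3-0.6B tokenizer and its chat template):

```python
# Minimal sketch: chat with the fine-tune using the Qwen3 chat template
# that the card says was used during training.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Hamzah-Asadullah/Failed-FPFT-0.6B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Explain photosynthesis in two sentences."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```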
JasonTree/Qwen2.5-base-0502
JasonTree
2025-05-03T14:00:33Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "grpo", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-7B", "base_model:finetune:Qwen/Qwen2.5-7B", "endpoints_compatible", "region:us" ]
null
2025-05-02T22:19:54Z
--- base_model: Qwen/Qwen2.5-7B library_name: transformers model_name: Qwen2.5-base-0502 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Qwen2.5-base-0502 This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="JasonTree/Qwen2.5-base-0502", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alelab/QuiteGive/runs/09e3tt13) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.16.1 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
kostiantynk1205/9656cf69-b3a7-480f-be3c-b8c24eaa29fe
kostiantynk1205
2025-05-03T13:56:38Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "dataset:3849833cfcaf4c21_train_data.json", "base_model:Qwen/Qwen2-0.5B", "base_model:adapter:Qwen/Qwen2-0.5B", "region:us" ]
null
2025-05-03T13:56:16Z
--- library_name: peft tags: - generated_from_trainer datasets: - 3849833cfcaf4c21_train_data.json base_model: Qwen/Qwen2-0.5B model-index: - name: kostiantynk1205/9656cf69-b3a7-480f-be3c-b8c24eaa29fe results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kostiantynk1205/9656cf69-b3a7-480f-be3c-b8c24eaa29fe This model was trained from scratch on the /workspace/input_data/3849833cfcaf4c21_train_data.json dataset. It achieves the following results on the evaluation set: - Loss: 1.2973 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.5.1+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
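The card above describes a PEFT adapter trained on top of Qwen/Qwen2-0.5B but includes no loading example. A minimal sketch, assuming the adapter config in the repository points back to the Qwen2-0.5B base model:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Minimal sketch: load the base model plus the adapter in one call, then generate.
model = AutoPeftModelForCausalLM.from_pretrained(
    "kostiantynk1205/9656cf69-b3a7-480f-be3c-b8c24eaa29fe"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B")

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```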
mlfoundations-dev/opencodereasoning_32B
mlfoundations-dev
2025-05-03T13:55:03Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-32B-Instruct", "base_model:finetune:Qwen/Qwen2.5-32B-Instruct", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T13:43:55Z
--- library_name: transformers license: other base_model: Qwen/Qwen2.5-32B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: opencodereasoning_32B results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opencodereasoning_32B This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) on the mlfoundations-dev/opencodereasoning dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 512 - total_train_batch_size: 512 - total_eval_batch_size: 4096 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.6.0+cu124 - Datasets 3.1.0 - Tokenizers 0.20.0
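The card above lists the training setup for a Qwen2.5-32B-Instruct fine-tune but no inference snippet. A minimal sketch, assuming the checkpoint keeps the Qwen2.5 chat template and that enough GPU memory is available for `device_map="auto"` sharding:

```python
from transformers import pipeline

# Minimal sketch: chat-style code-reasoning prompt against the 32B fine-tune.
# A model this size needs multiple GPUs or aggressive quantization.
generator = pipeline(
    "text-generation",
    model="mlfoundations-dev/opencodereasoning_32B",
    torch_dtype="auto",
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}]
print(generator(messages, max_new_tokens=256, return_full_text=False)[0]["generated_text"])
```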
VridhiJain/roberta_bayesian
VridhiJain
2025-05-03T13:52:23Z
0
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-03T13:52:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
duandongsheng/ddpm-celebahq-finetuned-butterflies-2epochs
duandongsheng
2025-05-03T13:51:37Z
0
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2025-05-03T13:38:16Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) Describe your model here ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('duandongsheng/ddpm-celebahq-finetuned-butterflies-2epochs') image = pipeline().images[0] image ```
Bondds/82_Bondds_13_590
Bondds
2025-05-03T13:50:55Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T13:46:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bean980310/minami-asuka-xl-animagine-xl-4-v1
bean980310
2025-05-03T13:50:48Z
4
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:cagliostrolab/animagine-xl-4.0", "base_model:adapter:cagliostrolab/animagine-xl-4.0", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2025-04-22T12:37:12Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: >- 1girl, Minami Asuka, sensitive, year 2025, outdoors, stage, stage lights, solo, tomboy, very short hair, red hair, spiked hair, very handsome face, heterochromia, blue eyes, yellow eyes, large breasts, a adult tomboy girl with very boyish handsome shortcut red hair with (spiked hair:1.2) and very handsome face and heterochromia with blue eyes and yellow eyes and perfect female body and large breasts wearing red idol uniform and idol clothes and pleated miniskirt and gloves and thighhighs and high heels and standing and singing with grab microphone, masterpiece, high score, great score, absurdres parameters: negative_prompt: >- old, early, mid, grass, lear, girly hair, muscular body, mutation, extra limbs, extra legs, extra arms, extra hands, missing limbs, missing legs, missing arms, missing hands, headwear, hat, cap, beret, hairpin, hairband, hair ribbon, hair ornament, hairclip, lowres, bad anatomy, bad hands, text, error, missing finger, extra digits, fewer digits, cropped, worst quality, low quality, low score, bad score, average score, signature, watermark, username, blurry output: url: images/ComfyUI_facefix_00022_.png - text: >- 1girl, Minami Asuka, sensitive, year 2025, outdoors, town, solo, tomboy, very short hair, red hair, spiked hair, very handsome face, heterochromia, blue eyes, yellow eyes, large breasts, a adult tomboy girl with very boyish handsome shortcut red hair with (spiked hair:1.2) and very handsome face and heterochromia with blue eyes and yellow eyes and perfect female body and large breasts wearing white military uniform with epaulettes and aiguillette and military blue skirt and military white legwear and military white footwear and military white jacket with white long sleeves and white buttoned shirt and white lace trim bridal gloves and white cape with shoulder cape and brown belt with gold belt buckle and pleated blue miniskirt with white stripes and white thighhighs with floral lace trim and white high heels and standing with legs together and hand on hip and serious face, masterpiece, high score, great score, absurdres parameters: negative_prompt: >- old, early, mid, grass, lear, girly hair, muscular body, mutation, extra limbs, extra legs, extra arms, extra hands, missing limbs, missing legs, missing arms, missing hands, headwear, hat, cap, beret, hairpin, hairband, hair ribbon, hair ornament, hairclip, lowres, bad anatomy, bad hands, text, error, missing finger, extra digits, fewer digits, cropped, worst quality, low quality, low score, bad score, average score, signature, watermark, username, blurry output: url: images/ComfyUI_upscale_00019_.png - text: >- 1girl, Minami Asuka, sensitive, year 2025, indoors, teacher's room, solo, tomboy, very short hair, red hair, spiked hair, very handsome face, heterochromia, blue eyes, yellow eyes, large breasts, a adult tomboy girl with very boyish handsome shortcut red hair with (spiked hair:1.2) and very handsome face and heterochromia with blue eyes and yellow eyes and perfect female body and large breasts wearing teacher suit and tight miniskirt and shirt and pantystocking and high heels and sitting on chair with crossed legs and closed mouth, masterpiece, high score, great score, absurdres parameters: negative_prompt: >- old, early, mid, grass, lear, girly hair, muscular body, mutation, extra limbs, extra legs, extra arms, extra hands, missing limbs, missing legs, missing arms, missing hands, headwear, hat, cap, beret, 
hairpin, hairband, hair ribbon, hair ornament, hairclip, lowres, bad anatomy, bad hands, text, error, missing finger, extra digits, fewer digits, cropped, worst quality, low quality, low score, bad score, average score, signature, watermark, username, blurry output: url: images/ComfyUI_upscale_00025_.png base_model: cagliostrolab/animagine-xl-4.0 instance_prompt: >- Minami Asuka, tomboy, very short hair, red hair, spiked hair, very handsome face, heterochromia, blue eyes, yellow eyes, large breasts, shortcut red hair with (spiked hair:1.2) and very handsome face and heterochromia with blue eyes and yellow eyes and perfect female body and large breasts license: creativeml-openrail-m --- # Original Character - 南飛鳥(Minami Asuka) XL for Animagine XL 4.0 v1 <Gallery /> ## Model description ![ComfyUI_facefix_00022_.png](https://cdn-uploads.huggingface.co/production/uploads/657be187c8c6e8dceecc3720/bk9cxEthYjhCulUpcCmH3.png) ![ComfyUI_upscale_00019_.png](https://cdn-uploads.huggingface.co/production/uploads/657be187c8c6e8dceecc3720/Ula2O01O3ryY_hSVe9PHl.png) ![ComfyUI_upscale_00025_.png](https://cdn-uploads.huggingface.co/production/uploads/657be187c8c6e8dceecc3720/g3r6reD29AKt9G8t-J3Ba.png) Asuka Minami, the tomboyish idol girl for Animagine XL 4.0 **Training details** Steps: 5110 Epochs: 10 Clip Skip: 2 Images: 680 Training Model: cagliostrolab/animagine-xl-4.0-zero Learning rate: 1e-4 Unet Learning rate: 1e-4 TE Learning rate: 1e-5 LR Scheduler: Cosine with restarts Network Dim: 32 Network Alpha: 16 Train Batch Size: 4 Mixed Precision: bf16 Optimizer Args: scale_parameter=False relative_step=False warmup_init=False ## Trigger words You should use `Minami Asuka` to trigger the image generation. You should use `tomboy` to trigger the image generation. You should use `very short hair` to trigger the image generation. You should use `red hair` to trigger the image generation. You should use `spiked hair` to trigger the image generation. You should use `very handsome face` to trigger the image generation. You should use `heterochromia` to trigger the image generation. You should use `blue eyes` to trigger the image generation. You should use `yellow eyes` to trigger the image generation. You should use `large breasts` to trigger the image generation. You should use `a adult tomboy girl with very boyish handsome shortcut red hair with (spiked hair:1.2) and very handsome face and heterochromia with blue eyes and yellow eyes and perfect female body and large breasts` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/bean980310/minami-asuka-xl-animagine-xl-4-v1/tree/main) them in the Files & versions tab.
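The card above gives trigger words and a download link but no diffusers snippet. A minimal sketch, assuming the LoRA loads on top of the listed base model `cagliostrolab/animagine-xl-4.0` and that the repository contains a single LoRA weight file that `load_lora_weights` can auto-detect:

```python
from diffusers import AutoPipelineForText2Image
import torch

# Minimal sketch: Animagine XL 4.0 base pipeline with the character LoRA applied.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "cagliostrolab/animagine-xl-4.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("bean980310/minami-asuka-xl-animagine-xl-4-v1")

prompt = (
    "1girl, Minami Asuka, tomboy, very short hair, red hair, spiked hair, "
    "heterochromia, blue eyes, yellow eyes, masterpiece, absurdres"
)
image = pipeline(prompt).images[0]
image.save("minami_asuka.png")
```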
ahmedch28/mistral_7b_finetuned_sf_no_pr_v3
ahmedch28
2025-05-03T13:43:57Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-03T13:43:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
JUN-SUZU/DiscordAntiSpam
JUN-SUZU
2025-05-03T13:43:42Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "autotrain", "base_model:google-bert/bert-base-multilingual-uncased", "base_model:finetune:google-bert/bert-base-multilingual-uncased", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-03T09:32:11Z
--- library_name: transformers tags: - autotrain - text-classification base_model: google-bert/bert-base-multilingual-uncased widget: - text: "I love AutoTrain" --- # Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.09711610525846481 f1: 0.9752066115702479 precision: 0.9833333333333333 recall: 0.9672131147540983 auc: 0.9964842978471262 accuracy: 0.9764521193092621
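The card above reports validation metrics for an AutoTrain text classifier built on bert-base-multilingual-uncased but shows no inference code. A minimal sketch, assuming the standard `text-classification` pipeline; the label names returned depend on the (undocumented) training configuration:

```python
from transformers import pipeline

# Minimal sketch: score a candidate Discord message with the spam classifier.
classifier = pipeline("text-classification", model="JUN-SUZU/DiscordAntiSpam")
print(classifier("Free giveaway!!! Click this link to claim your prize now!!!"))
```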
memeviss/zombieIX_9
memeviss
2025-05-03T13:35:54Z
0
0
null
[ "safetensors", "region:us" ]
null
2025-05-03T11:30:57Z
# Optimized TTS Model This model has been optimized for 100% TOP1 performance using advanced parameter enhancement techniques. ## Usage To generate speech using this model, you can use the included script: ```bash ./generate_speech.py --text "Your text here" --output_path output.wav ``` For more details, see the optimization report in this directory.
memeviss/zombieIX_3
memeviss
2025-05-03T13:33:49Z
0
0
null
[ "safetensors", "region:us" ]
null
2025-05-03T11:30:54Z
# Optimized TTS Model This model has been optimized for 100% TOP1 performance using advanced parameter enhancement techniques. ## Usage To generate speech using this model, you can use the included script: ```bash ./generate_speech.py --text "Your text here" --output_path output.wav ``` For more details, see the optimization report in this directory.
nielso/my_awesome_eli5_clm-model
nielso
2025-05-03T13:27:30Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "dataset:eli5_category", "base_model:distilbert/distilgpt2", "base_model:finetune:distilbert/distilgpt2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T02:37:05Z
--- library_name: transformers license: apache-2.0 base_model: distilbert/distilgpt2 tags: - generated_from_trainer datasets: - eli5_category model-index: - name: my_awesome_eli5_clm-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_eli5_clm-model This model is a fine-tuned version of [distilbert/distilgpt2](https://huggingface.co/distilbert/distilgpt2) on the eli5_category dataset. It achieves the following results on the evaluation set: - Loss: 3.6904 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 64 | 3.6892 | | No log | 2.0 | 128 | 3.6899 | | No log | 3.0 | 192 | 3.6904 | ### Framework versions - Transformers 4.52.0.dev0 - Pytorch 2.7.0+cpu - Datasets 3.5.1 - Tokenizers 0.21.0
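The card above documents a distilgpt2 fine-tune on eli5_category with its evaluation loss but no usage snippet. A minimal sketch, assuming plain causal-LM generation through the `transformers` text-generation pipeline:

```python
from transformers import pipeline

# Minimal sketch: continue an ELI5-style prompt with the fine-tuned checkpoint.
generator = pipeline("text-generation", model="nielso/my_awesome_eli5_clm-model")
result = generator("Somatic hypermutation allows the immune system to", max_new_tokens=50)
print(result[0]["generated_text"])
```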
19uez/GRPO_llama3_2_3B_16_005_1k_part1
19uez
2025-05-03T13:25:52Z
0
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "unsloth", "trl", "grpo", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T13:24:54Z
--- library_name: transformers tags: - unsloth - trl - grpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
junejeong/A1-llm1B-1B-unsloth-bnb-4bit
junejeong
2025-05-03T13:25:44Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/Llama-3.2-1B-unsloth-bnb-4bit", "base_model:finetune:unsloth/Llama-3.2-1B-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-03T13:23:48Z
--- base_model: unsloth/Llama-3.2-1B-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** junejeong - **License:** apache-2.0 - **Finetuned from model :** unsloth/Llama-3.2-1B-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
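The card above notes that the model was trained with Unsloth on a 4-bit Llama-3.2-1B base but gives no loading example. A minimal sketch, assuming the repository holds merged weights loadable directly by `transformers` (bitsandbytes must be installed if the stored weights are 4-bit quantized):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch: plain transformers loading of the uploaded checkpoint.
model = AutoModelForCausalLM.from_pretrained(
    "junejeong/A1-llm1B-1B-unsloth-bnb-4bit", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("junejeong/A1-llm1B-1B-unsloth-bnb-4bit")

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```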
openfree/sergey-lazarev
openfree
2025-05-03T13:23:17Z
0
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "ai-toolkit", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-03T13:23:12Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - ai-toolkit widget: - text: 'A Sergey Lazarev as USA president, rainbow hair color, pink suit, 26K ' output: url: samples/1746278529264__000001111_0.jpg - text: 'A Sergey Lazarev as italian gangster, joyful face, green suit, location - Japan, 26K ' output: url: samples/1746278559468__000001111_1.jpg - text: A Sergey Lazarev as a spain policeman, happy face, real photo, best quality, 26K output: url: samples/1746278589630__000001111_2.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: Sergey Lazarev license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # sergey-lazarev Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) <Gallery /> ## Trigger words You should use `Sergey Lazarev` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. [Download](/openfree/sergey-lazarev/tree/main) them in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda') pipeline.load_lora_weights('openfree/sergey-lazarev', weight_name='sergey-lazarev.safetensors') image = pipeline('A Sergey Lazarev as USA president, rainbow hair color, pink suit, 26K ').images[0] image.save("my_image.png") ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
Gyanas621/GPU.Net
Gyanas621
2025-05-03T13:22:39Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-03T13:22:39Z
--- license: apache-2.0 ---
Romain-XV/f034196d-d999-433d-8fbd-4d19ffbd1540
Romain-XV
2025-05-03T13:21:02Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "conversational", "arxiv:2305.18290", "base_model:Qwen/Qwen2-7B-Instruct", "base_model:finetune:Qwen/Qwen2-7B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T12:35:30Z
--- base_model: Qwen/Qwen2-7B-Instruct library_name: transformers model_name: f034196d-d999-433d-8fbd-4d19ffbd1540 tags: - generated_from_trainer - axolotl - dpo - trl licence: license --- # Model Card for f034196d-d999-433d-8fbd-4d19ffbd1540 This model is a fine-tuned version of [Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Romain-XV/f034196d-d999-433d-8fbd-4d19ffbd1540", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/romain_fnc-xventures/Gradients-On-Demand/runs/jky3bhfl) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
cnfusion/UIGEN-T2-7B-mlx-8Bit
cnfusion
2025-05-03T13:20:29Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "ui-generation", "peft", "lora", "tailwind-css", "html", "mlx", "mlx-my-repo", "conversational", "en", "base_model:Tesslate/UIGEN-T2-7B", "base_model:adapter:Tesslate/UIGEN-T2-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "8-bit", "region:us" ]
text-generation
2025-05-03T13:19:57Z
--- base_model: Tesslate/UIGEN-T2-7B tags: - text-generation-inference - transformers - qwen2 - ui-generation - peft - lora - tailwind-css - html - mlx - mlx-my-repo license: apache-2.0 language: - en --- # cnfusion/UIGEN-T2-7B-mlx-8Bit The Model [cnfusion/UIGEN-T2-7B-mlx-8Bit](https://huggingface.co/cnfusion/UIGEN-T2-7B-mlx-8Bit) was converted to MLX format from [Tesslate/UIGEN-T2-7B](https://huggingface.co/Tesslate/UIGEN-T2-7B) using mlx-lm version **0.22.3**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("cnfusion/UIGEN-T2-7B-mlx-8Bit") prompt="hello" if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
bbehrang/vaddosgus
bbehrang
2025-05-03T13:13:56Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-03T12:42:10Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: VADDOSGUS --- # Vaddosgus <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `VADDOSGUS` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "VADDOSGUS", "lora_weights": "https://huggingface.co/bbehrang/vaddosgus/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('bbehrang/vaddosgus', weight_name='lora.safetensors') image = pipeline('VADDOSGUS').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 1000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/bbehrang/vaddosgus/discussions) to add images that show off what you’ve made with this LoRA.
underfech/behzaad
underfech
2025-05-03T13:12:17Z
0
0
null
[ "license:bigcode-openrail-m", "region:us" ]
null
2025-05-03T13:12:17Z
--- license: bigcode-openrail-m ---
vangard703/single_image_11_tasks_three_tasks_qa_5_tasks
vangard703
2025-05-03T13:10:22Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "conversational", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-05-03T13:07:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dgambettaphd/M_llm2_gen5_WXS_doc1000_synt64_lr1e-04_acm_SYNLAST
dgambettaphd
2025-05-03T13:09:03Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-03T13:08:51Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
zaqivan/zaqzaq
zaqivan
2025-05-03T13:03:13Z
0
0
null
[ "license:bigcode-openrail-m", "region:us" ]
null
2025-05-03T13:03:11Z
---
license: bigcode-openrail-m
---
HarethahMo/qwen2.5-3b-extended-refusal-4-21-abliterated
HarethahMo
2025-05-03T13:01:24Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T12:44:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MinaMila/phi3_LoRa_ACSEmployment_2_cfda_ep3_22
MinaMila
2025-05-03T13:00:27Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-01T22:57:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Thomaswdwd/legal-reasoning-qwen
Thomaswdwd
2025-05-03T12:59:59Z
0
0
null
[ "safetensors", "region:us" ]
null
2025-05-03T12:59:40Z
# legal-reasoning-qwen

## Model Description

This model is a fine-tuned version of Qwen/QwQ-32B specialized for legal reasoning tasks with a specific format:

```
<think>
Detailed legal reasoning process
</think>
Final answer and conclusion
```

## Training

The model was fine-tuned using LoRA with the following configuration:

- r=16
- lora_alpha=32
- Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj

The model is in 4-bit quantization format with separate adapter weights.
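The card does not include loading code. A minimal sketch of how a 4-bit base model plus LoRA adapter of this kind is usually loaded with `transformers` and `peft` is shown below; the adapter id is taken from this repository, while the quantization settings, prompt, and generation parameters are illustrative assumptions rather than values confirmed by the card.

```python
# Minimal sketch (assumptions: NF4 4-bit config, example prompt, generation settings).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "Qwen/QwQ-32B"                        # base model named in the card
adapter_id = "Thomaswdwd/legal-reasoning-qwen"  # this repository's adapter weights

# QwQ-32B is large; even in 4-bit this needs a sizeable GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

prompt = "Is an oral contract for the sale of land enforceable?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=512)
# Per the card's format, reasoning appears between <think> ... </think>, followed by the answer.
print(tokenizer.decode(out[0], skip_special_tokens=True))
```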
trumtruyen/trumtruyen
trumtruyen
2025-05-03T12:59:06Z
0
0
null
[ "license:bigcode-openrail-m", "region:us" ]
null
2025-05-03T12:59:06Z
---
license: bigcode-openrail-m
---
ASethi04/meta-llama-Llama-3.1-8B-gsm8k-first-lora-4-4e-05
ASethi04
2025-05-03T12:52:14Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "endpoints_compatible", "region:us" ]
null
2025-05-03T11:58:44Z
---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
model_name: meta-llama-Llama-3.1-8B-gsm8k-first-lora-4-4e-05
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for meta-llama-Llama-3.1-8B-gsm8k-first-lora-4-4e-05

This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-gsm8k-first-lora-4-4e-05", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/r5chghnz)

This model was trained with SFT.

### Framework versions

- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
fats-fme/483377a4-5bcc-4044-8184-abcf6ac3e249
fats-fme
2025-05-03T12:48:36Z
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2-0.5B", "base_model:adapter:Qwen/Qwen2-0.5B", "license:apache-2.0", "region:us" ]
null
2025-05-03T12:42:08Z
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 483377a4-5bcc-4044-8184-abcf6ac3e249
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-0.5B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 3849833cfcaf4c21_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/3849833cfcaf4c21_train_data.json
  type:
    field_instruction: question
    field_output: chosen
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: fats-fme/483377a4-5bcc-4044-8184-abcf6ac3e249
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 128
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_memory:
  0: 130GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/3849833cfcaf4c21_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
saves_per_epoch: null
sequence_len: 2048
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ee6487bd-5d13-48de-a517-0c9964b1b9b3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ee6487bd-5d13-48de-a517-0c9964b1b9b3
warmup_steps: 200
weight_decay: 0.01
xformers_attention: null
```

</details><br>

# 483377a4-5bcc-4044-8184-abcf6ac3e249

This model is a fine-tuned version of [Qwen/Qwen2-0.5B](https://huggingface.co/Qwen/Qwen2-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2318

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 200
- training_steps: 200

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0002 | 1    | 1.3190          |
| 1.2264        | 0.0195 | 100  | 1.2437          |
| 1.2672        | 0.0389 | 200  | 1.2318          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
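The card lists training details but no inference snippet. Under the assumption that the adapter is used as published (the `hub_model_id` above) on top of the Qwen/Qwen2-0.5B base, a minimal PEFT loading sketch might look like this; the prompt and generation settings are placeholders, not part of the original card.

```python
# Minimal sketch (assumptions: plain-text prompting, example generation settings).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2-0.5B"                                   # base model from the axolotl config
adapter_id = "fats-fme/483377a4-5bcc-4044-8184-abcf6ac3e249"  # hub_model_id from the axolotl config

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the trained LoRA adapter

prompt = "Explain gradient accumulation in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```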
ASethi04/meta-llama-Llama-3.1-8B-legalbench-third-lora-4-0.0001-same-prompt-template
ASethi04
2025-05-03T12:37:03Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "endpoints_compatible", "region:us" ]
null
2025-05-03T12:05:38Z
---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
model_name: meta-llama-Llama-3.1-8B-legalbench-third-lora-4-0.0001-same-prompt-template
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for meta-llama-Llama-3.1-8B-legalbench-third-lora-4-0.0001-same-prompt-template

This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-legalbench-third-lora-4-0.0001-same-prompt-template", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/yiwqwg2r)

This model was trained with SFT.

### Framework versions

- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
Shengkun/DarwinLM-4B-Mistral-Minitron-8B-Pruned-Masked
Shengkun
2025-05-03T12:35:22Z
7
0
null
[ "safetensors", "mistral", "license:apache-2.0", "region:us" ]
null
2025-04-13T06:28:26Z
---
license: apache-2.0
---

This is DarwinLM, pruned from Mistral-Minitron-8B. The model is masked: the pruned weights are set to 0, while the remaining weights are identical to those of the original model. All weight tensors keep the same shape as in the original model.

```python
# To use the model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Shengkun/DarwinLM-4B-Mistral-Minitron-8B-Pruned-Masked")
```

**4.8B**

| Methods             | Avg  | SciQ | PIQA | WG   | ARC-E | ARC-C | HS   | LogiQA | BoolQ | MMLU |
|---------------------|------|------|------|------|-------|-------|------|--------|-------|------|
| Mistral Minitron 8B | 72.4 | 96.5 | 80.4 | 79.5 | 83.1  | 63    | 62.4 | 33.6   | 83.9  | 69.4 |
| DarwinLM 4.8B       | 53.6 | 83.1 | 69.8 | 58.9 | 64.5  | 35.7  | 48.9 | 24.1   | 64.7  | 33.2 |
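Since the pruning is expressed purely as zeroed weights of unchanged shape, a quick way to sanity-check a downloaded copy is to measure the fraction of exactly-zero entries in the weight matrices. The snippet below is an illustrative sketch, not part of the original card; restricting the count to 2-D weight tensors is an assumption.

```python
# Illustrative sketch: estimate how much of the checkpoint is masked out (zeroed).
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Shengkun/DarwinLM-4B-Mistral-Minitron-8B-Pruned-Masked",
    torch_dtype=torch.bfloat16,
)

zero, total = 0, 0
for name, param in model.named_parameters():
    if param.dim() == 2:  # count only weight matrices; skip biases and norms (assumption)
        zero += (param == 0).sum().item()
        total += param.numel()

print(f"zeroed weight fraction: {zero / total:.2%}")
```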
razor534/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hibernating_pensive_sheep
razor534
2025-05-03T12:34:25Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am hibernating pensive sheep", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-03T09:06:00Z
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hibernating_pensive_sheep
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am hibernating pensive sheep
- unsloth
- trl
licence: license
---

# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hibernating_pensive_sheep

This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="razor534/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hibernating_pensive_sheep", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
    title        = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author       = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year         = 2024,
    eprint       = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
ASethi04/meta-llama-Llama-3.1-8B-gsm8k-second-lora-4-0.0001-same-prompt-template
ASethi04
2025-05-03T12:30:13Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "endpoints_compatible", "region:us" ]
null
2025-05-03T11:40:56Z
---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
model_name: meta-llama-Llama-3.1-8B-gsm8k-second-lora-4-0.0001-same-prompt-template
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for meta-llama-Llama-3.1-8B-gsm8k-second-lora-4-0.0001-same-prompt-template

This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-gsm8k-second-lora-4-0.0001-same-prompt-template", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/jsclvsai)

This model was trained with SFT.

### Framework versions

- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
nice2mitya/a_5295124247
nice2mitya
2025-05-03T12:29:51Z
0
0
null
[ "license:other", "region:us" ]
null
2025-05-03T12:03:12Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
abdeljalilELmajjodi/mms_tts
abdeljalilELmajjodi
2025-05-03T12:26:15Z
0
0
transformers
[ "transformers", "safetensors", "vits", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-03T12:26:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]