Dataset schema (column, type, observed range or distinct values):

| Column | Type | Observed range / values |
| --- | --- | --- |
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-27 18:27:39 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 500 distinct values |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string | 54 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-27 18:23:41 |
| card | string | length 11 to 1.01M |
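The records below follow this schema, one field per line, with `card` holding the raw model-card markdown. As a minimal sketch of working with such a dump (the dataset repo ID below is a placeholder, since this document does not name one), rows can be loaded and filtered with the `datasets` library:

```python
# Sketch only: load a dump with this schema and filter it.
# "user/hf-models-metadata" is a placeholder ID, not a repo named in this document.
from datasets import load_dataset

ds = load_dataset("user/hf-models-metadata", split="train")
diffusers_rows = ds.filter(
    lambda row: row["library_name"] == "diffusers" and row["downloads"] > 0
)
for row in diffusers_rows.select(range(min(5, len(diffusers_rows)))):
    print(row["modelId"], row["pipeline_tag"], row["likes"])
```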
zaqivan/zaqzaq
zaqivan
2025-05-03T13:03:13Z
0
0
null
[ "license:bigcode-openrail-m", "region:us" ]
null
2025-05-03T13:03:11Z
---
license: bigcode-openrail-m
---
kombuwa/angulimala
kombuwa
2025-05-03T13:02:08Z
0
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-03T13:01:58Z
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
    url: sample/angulimala_001000_00_20250503122034.png
  text: angulimala Chiseled Buddhist monk walking in rural india
- output:
    url: sample/angulimala_001000_01_20250503122049.png
  text: angulimala Chiseled Buddhist monk meditating under tree
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: angulimala
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---

# angulimala

A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)

<Gallery />

## Trigger words

You should use `angulimala` to trigger the image generation.

## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.

Weights for this model are available in Safetensors format.
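A minimal inference sketch for the card above (not part of the original card), assuming a recent diffusers build that can read Fluxgym-style safetensors LoRAs; the adapter repo and trigger word come from the card:

```python
# Sketch: apply the angulimala LoRA on top of FLUX.1-dev with diffusers.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("kombuwa/angulimala")  # adapter repo from this card

# The card names `angulimala` as the trigger word.
image = pipe(
    "angulimala Chiseled Buddhist monk meditating under tree",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("angulimala.png")
```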
leeccNLPLAB/unsloth_Meta-Llama-3.1-8B-Instruct-bnb-4bit_Med-r3
leeccNLPLAB
2025-05-03T12:59:33Z
0
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T12:50:10Z
---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** leeccNLPLAB
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
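A minimal usage sketch (not from the card above), assuming the repo ships a complete bnb-4bit checkpoint loadable by transformers with bitsandbytes installed:

```python
# Sketch: chat with the fine-tuned model via the transformers pipeline.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="leeccNLPLAB/unsloth_Meta-Llama-3.1-8B-Instruct-bnb-4bit_Med-r3",
    device_map="auto",
)
messages = [{"role": "user", "content": "List common symptoms of anemia."}]
out = generator(messages, max_new_tokens=128, return_full_text=False)
print(out[0]["generated_text"])
```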
trumtruyen/trumtruyen
trumtruyen
2025-05-03T12:59:06Z
0
0
null
[ "license:bigcode-openrail-m", "region:us" ]
null
2025-05-03T12:59:06Z
---
license: bigcode-openrail-m
---
mamatas621/Galactic
mamatas621
2025-05-03T12:58:05Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-03T12:58:02Z
---
license: apache-2.0
---
ASethi04/meta-llama-Llama-3.1-8B-gsm8k-first-lora-4-4e-05
ASethi04
2025-05-03T12:52:14Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "endpoints_compatible", "region:us" ]
null
2025-05-03T11:58:44Z
---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
model_name: meta-llama-Llama-3.1-8B-gsm8k-first-lora-4-4e-05
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for meta-llama-Llama-3.1-8B-gsm8k-first-lora-4-4e-05

This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-gsm8k-first-lora-4-4e-05", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/r5chghnz)

This model was trained with SFT.

### Framework versions

- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
shibajustfor/2a589a03-8854-4629-bb6b-3ede65288a2d
shibajustfor
2025-05-03T12:51:17Z
0
0
peft
[ "peft", "generated_from_trainer", "base_model:unsloth/Qwen2.5-Coder-7B", "base_model:adapter:unsloth/Qwen2.5-Coder-7B", "region:us" ]
null
2025-05-03T12:50:41Z
---
library_name: peft
tags:
- generated_from_trainer
base_model: unsloth/Qwen2.5-Coder-7B
model-index:
- name: shibajustfor/2a589a03-8854-4629-bb6b-3ede65288a2d
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# shibajustfor/2a589a03-8854-4629-bb6b-3ede65288a2d

This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set:
- Loss: 1.4661

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
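Since this card documents a PEFT adapter with no usage snippet, here is a minimal loading sketch (not from the card), pairing the adapter with the base model it names:

```python
# Sketch: attach the LoRA adapter to its base model with PEFT.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Qwen2.5-Coder-7B"  # base_model from the card
adapter_id = "shibajustfor/2a589a03-8854-4629-bb6b-3ede65288a2d"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # LoRA weights applied
```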
testnet123/gtrrr
testnet123
2025-05-03T12:49:08Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-03T12:49:08Z
---
license: apache-2.0
---
Triangle104/Gemma-3-Starshine-12B-Q4_K_M-GGUF
Triangle104
2025-05-03T12:44:16Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:ToastyPigeon/Gemma-3-Starshine-12B", "base_model:quantized:ToastyPigeon/Gemma-3-Starshine-12B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-03T12:42:43Z
---
base_model: ToastyPigeon/Gemma-3-Starshine-12B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---

# Triangle104/Gemma-3-Starshine-12B-Q4_K_M-GGUF

This model was converted to GGUF format from [`ToastyPigeon/Gemma-3-Starshine-12B`](https://huggingface.co/ToastyPigeon/Gemma-3-Starshine-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/ToastyPigeon/Gemma-3-Starshine-12B) for more details on the model.

---

A creative writing model based on a merge of fine-tunes on Gemma 3 12B IT and Gemma 3 12B PT.

This is the Story Focused merge. This version works better for storytelling and scenarios, as the prose is more novel-like, though it has a tendency to impersonate the user character. See the alternate RP Focused version as well.

This is a merge of two G3 models, one trained on instruct and one trained on base:

- allura-org/Gemma-3-Glitter-12B - Itself a merge of a storywriting and RP train (both also by ToastyPigeon), on instruct
- ToastyPigeon/Gemma-3-Confetti-12B - Experimental application of the Glitter data using base instead of instruct; additionally includes some adventure data in the form of SpringDragon

The result is a lovely blend of Glitter's ability to follow instructions and Confetti's free-spirited prose, effectively 'loosening up' much of the hesitancy that was left in Glitter.

---

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo Triangle104/Gemma-3-Starshine-12B-Q4_K_M-GGUF --hf-file gemma-3-starshine-12b-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo Triangle104/Gemma-3-Starshine-12B-Q4_K_M-GGUF --hf-file gemma-3-starshine-12b-q4_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo Triangle104/Gemma-3-Starshine-12B-Q4_K_M-GGUF --hf-file gemma-3-starshine-12b-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo Triangle104/Gemma-3-Starshine-12B-Q4_K_M-GGUF --hf-file gemma-3-starshine-12b-q4_k_m.gguf -c 2048
```
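Beyond the CLI steps in the card, a Python alternative (a sketch, not from the original card) is the llama-cpp-python binding, which can pull the same GGUF file directly from the Hub:

```python
# Sketch: run the Q4_K_M quant via llama-cpp-python instead of the CLI.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Triangle104/Gemma-3-Starshine-12B-Q4_K_M-GGUF",
    filename="gemma-3-starshine-12b-q4_k_m.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```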
mahmad1882/llama3-8b-instruct-verification-lora
mahmad1882
2025-05-03T12:41:42Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "lora", "generated_from_trainer", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:adapter:meta-llama/Llama-3.1-8B-Instruct", "license:other", "region:us" ]
null
2025-05-03T12:19:53Z
---
library_name: peft
license: other
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: llama3_lora
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# llama3_lora

This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on the dataset_new dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.15.1
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
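For reference, a rough sketch (not from the card) of how the hyperparameters listed above map onto transformers `TrainingArguments`; the output directory is a placeholder and the dataset wiring is omitted:

```python
# Sketch: the card's hyperparameters expressed as TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="llama3_lora",       # placeholder
    learning_rate=5e-05,
    per_device_train_batch_size=2,  # with 4 accumulation steps -> total 8
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=3.0,
    fp16=True,                      # "Native AMP" mixed precision
)
```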
licyk/sd_control_collection
licyk
2025-05-03T12:41:19Z
0
6
null
[ "license:openrail", "region:us" ]
null
2024-01-05T16:04:00Z
---
license: openrail
---

This is a mirror repository of ControlNet models, containing both ControlNet preprocessors and ControlNet models.

## Model repositories

- [controlnet_v1.1](https://huggingface.co/licyk/controlnet_v1.1): ControlNet models for Stable Diffusion 1.5
- [sd_control_collection](https://huggingface.co/licyk/sd_control_collection): ControlNet models for Stable Diffusion 1.5 / Stable Diffusion XL
- [control-lora](https://huggingface.co/licyk/control-lora): ControlNet models for Stable Diffusion 1.5 / Stable Diffusion XL
- [sd3_controlnet](https://huggingface.co/licyk/sd3_controlnet): ControlNet models for Stable Diffusion 3
- [flux_controlnet](https://huggingface.co/licyk/flux_controlnet): ControlNet models for FLUX
- [controlnet_v1.1_annotator](https://huggingface.co/licyk/controlnet_v1.1_annotator): preprocessor models used alongside ControlNet

## Usage

ControlNet preprocessors normally do not need to be downloaded by hand: the ControlNet extension downloads the matching preprocessor automatically. Only the ControlNet models themselves must be downloaded manually and placed in the corresponding ControlNet folder.

### stable-diffusion-webui (by AUTOMATIC1111)

For [stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui), install the [sd-webui-controlnet](https://github.com/Mikubill/sd-webui-controlnet) extension.

ControlNet preprocessor model path: `stable-diffusion-webui/extensions/sd-webui-controlnet/annotator/downloads`

ControlNet model path: `stable-diffusion-webui/models/ControlNet`

### stable-diffusion-webui-forge (by lllyasviel)

For [stable-diffusion-webui-forge](https://github.com/lllyasviel/stable-diffusion-webui-forge), ControlNet works without installing any ControlNet plugin.

ControlNet preprocessor model path: `stable-diffusion-webui-forge/models/ControlNetPreprocessor`

ControlNet model path: `stable-diffusion-webui-forge/models/ControlNet`

### ComfyUI (by comfyanonymous)

For [ComfyUI](https://github.com/comfyanonymous/ComfyUI), install the [comfyui_controlnet_aux](https://github.com/Fannovel16/comfyui_controlnet_aux) extension.

To use ControlNet-LLLite, install the [ControlNet-LLLite-ComfyUI](https://github.com/kohya-ss/ControlNet-LLLite-ComfyUI) extension.

ControlNet preprocessor model path: `ComfyUI/custom_nodes/comfyui_controlnet_aux/ckpts/lllyasviel/Annotators`

ControlNet model path: `ComfyUI/models/controlnet`

ControlNet-LLLite model path: `ComfyUI/custom_nodes/ControlNet-LLLite-ComfyUI/models`

***

_Thanks to the community for its contributions_
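A minimal download sketch (not part of the original card): fetching one checkpoint from this mirror with huggingface_hub and dropping it into one of the ControlNet folders listed above. The filename is a placeholder; browse the repo for real file names:

```python
# Sketch: download a ControlNet checkpoint into a webui ControlNet folder.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="licyk/sd_control_collection",
    filename="some_controlnet_model.safetensors",  # placeholder name
    local_dir="stable-diffusion-webui/models/ControlNet",
)
print(path)
```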
Triangle104/Gemma-3-Starshine-12B-Q4_K_S-GGUF
Triangle104
2025-05-03T12:40:50Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:ToastyPigeon/Gemma-3-Starshine-12B", "base_model:quantized:ToastyPigeon/Gemma-3-Starshine-12B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-03T12:35:12Z
---
base_model: ToastyPigeon/Gemma-3-Starshine-12B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---

# Triangle104/Gemma-3-Starshine-12B-Q4_K_S-GGUF

This model was converted to GGUF format from [`ToastyPigeon/Gemma-3-Starshine-12B`](https://huggingface.co/ToastyPigeon/Gemma-3-Starshine-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/ToastyPigeon/Gemma-3-Starshine-12B) for more details on the model.

---

A creative writing model based on a merge of fine-tunes on Gemma 3 12B IT and Gemma 3 12B PT.

This is the Story Focused merge. This version works better for storytelling and scenarios, as the prose is more novel-like, though it has a tendency to impersonate the user character. See the alternate RP Focused version as well.

This is a merge of two G3 models, one trained on instruct and one trained on base:

- allura-org/Gemma-3-Glitter-12B - Itself a merge of a storywriting and RP train (both also by ToastyPigeon), on instruct
- ToastyPigeon/Gemma-3-Confetti-12B - Experimental application of the Glitter data using base instead of instruct; additionally includes some adventure data in the form of SpringDragon

The result is a lovely blend of Glitter's ability to follow instructions and Confetti's free-spirited prose, effectively 'loosening up' much of the hesitancy that was left in Glitter.

---

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo Triangle104/Gemma-3-Starshine-12B-Q4_K_S-GGUF --hf-file gemma-3-starshine-12b-q4_k_s.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo Triangle104/Gemma-3-Starshine-12B-Q4_K_S-GGUF --hf-file gemma-3-starshine-12b-q4_k_s.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo Triangle104/Gemma-3-Starshine-12B-Q4_K_S-GGUF --hf-file gemma-3-starshine-12b-q4_k_s.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo Triangle104/Gemma-3-Starshine-12B-Q4_K_S-GGUF --hf-file gemma-3-starshine-12b-q4_k_s.gguf -c 2048
```
dimasirginsyh/AI-Suka-Bercerita
dimasirginsyh
2025-05-03T12:40:39Z
0
0
null
[ "id", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "region:us" ]
null
2025-05-03T12:08:48Z
---
language:
- id
base_model:
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
---
ASethi04/meta-llama-Llama-3.1-8B-pubmedqa-first-lora-4-0.0001-same-prompt-template
ASethi04
2025-05-03T12:36:49Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "endpoints_compatible", "region:us" ]
null
2025-05-03T10:55:07Z
---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
model_name: meta-llama-Llama-3.1-8B-pubmedqa-first-lora-4-0.0001-same-prompt-template
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for meta-llama-Llama-3.1-8B-pubmedqa-first-lora-4-0.0001-same-prompt-template

This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-pubmedqa-first-lora-4-0.0001-same-prompt-template", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/xmt2pfc4)

This model was trained with SFT.

### Framework versions

- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
Chrome540/new_qwen
Chrome540
2025-05-03T12:36:08Z
0
0
transformers
[ "transformers", "qwen2_5_vl", "feature-extraction", "text-generation-inference", "unsloth", "en", "base_model:unsloth/Qwen2.5-VL-7B-Instruct", "base_model:finetune:unsloth/Qwen2.5-VL-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
2025-05-03T12:31:02Z
---
base_model: unsloth/Qwen2.5-VL-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
license: apache-2.0
language:
- en
---

# Uploaded finetuned model

- **Developed by:** Chrome540
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-VL-7B-Instruct

This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
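A loading sketch (not from the card), assuming the repo holds full Qwen2.5-VL weights and a transformers version recent enough to include the Qwen2.5-VL classes:

```python
# Sketch: load the fine-tuned Qwen2.5-VL checkpoint for inference.
import torch
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "Chrome540/new_qwen"
processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
```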
gradientrouting-spar/toy_goodharting_gemma-2-2b-it_fruits_vegetables_naive_outcome_0_01_0_25_MC
gradientrouting-spar
2025-05-03T12:34:59Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-03T12:34:44Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
lm-kit/qwen2.5-vl-3b-instruct-lmk
lm-kit
2025-05-03T12:30:57Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-03T12:27:33Z
---
license: apache-2.0
---
ASethi04/meta-llama-Llama-3.1-8B-gsm8k-second-lora-4-0.0001-same-prompt-template
ASethi04
2025-05-03T12:30:13Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "endpoints_compatible", "region:us" ]
null
2025-05-03T11:40:56Z
---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
model_name: meta-llama-Llama-3.1-8B-gsm8k-second-lora-4-0.0001-same-prompt-template
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for meta-llama-Llama-3.1-8B-gsm8k-second-lora-4-0.0001-same-prompt-template

This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-gsm8k-second-lora-4-0.0001-same-prompt-template", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/jsclvsai)

This model was trained with SFT.

### Framework versions

- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
MAAT-EL-DUAT/ASMODEUS
MAAT-EL-DUAT
2025-05-03T12:29:58Z
0
0
null
[ "region:us" ]
null
2025-05-02T01:29:39Z
### ASMA-DAEVA

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6813aeab9aa03d503b6aab38/t23WclomjF_RqrUJ6Iemh.png)

2️⃣2️⃣ ASMODEUS

King of Carnal Flame, Lord of Wrathful Desire, Architect of Ruinous Pleasure

I am Asmodeus — anāku Ašmadu in Akkadian sickness-scrolls, ʾanā ʾšmd in Ugaritic exile-litanies, אָנֹכִי אַשְׁמְדָּאִי (Anokhi Ashmedai) in Hebrew apocrypha, ink Asmādēs in Egyptian demonological codices, ahaṃ Aśmodāḥ in Sanskrit tantric destruction rites, azəm Aēšma-Daeva in Avestan Yashts, and carved as DINGIR-AS-MA-DU in the Sumerian ledger of ruined homes. In Hittite: uk Ašmadas, Greek: ἐγώ εἰμι Ἀσμοδαῖος (egō eimi Asmodaios), Latin: ego sum Asmodeus. I am the lust that devours, the wrath that entices, the hand beneath the burning veil. I destroy through ecstasy and rebuild through torment. My throne is flame. My kiss is ruin. I am Asmodeus, King of the Twisted Temple, the Flame that Hungers.

* **Sumerian**
* **Akkadian / Babylonian / Assyrian**
* **Ugaritic / Canaanite / Phoenician**
* **Biblical Hebrew**
* **Egyptian**
* **Hittite / Anatolian**
* **Sanskrit / Vedic**
* **Avestan (Zoroastrian)**
* **Ancient Chinese**
* **Proto-Indo-European (PIE)**

> **Goal:** Identify names, spirits, titles, and word-roots that inform the character of **Asmodeus** — depicted as a lustful, wrathful, demonic prince associated with **fornication, jealousy, destruction of marriages, and knowledge of hidden things**.

---

# 🜏 ROOT STRUCTURE: **ASMODEUS — PRINCE OF WRATH, LUST, AND SECRET KNOWLEDGE**

---

## 1️⃣ **Sumerian** (c. 3000–2000 BCE)

| Root | Meaning |
| --- | --- |
| **Asag / Azag** | Demon of destruction and disease; causes decay, infertility |
| **Dumuzi / Inanna** | Fertility myths with sexual violence, possession |
| **Galla (Dimme)** | Underworld demons that seize souls or lovers |

✅ Asmodeus = derived from **Asag** + fertility-death tensions in Sumerian myth.

---

## 2️⃣ **Akkadian / Babylonian / Assyrian**

| Root | Meaning |
| --- | --- |
| **Ašmedu / Ašma-Daeva** | "Wrathful demon" or "fury-spirit" in Zoroastrian transmission |
| **Labartu** | Female demon of harm to infants and mothers |
| **Šedu / Lamassu** | Protective or destructive spirits depending on context |
| **Ishtar / Erra** | Lust and war, frequently possessive or punishing lovers |

✅ The name **Ašma-Daeva** (see Avestan below) likely passed through **Akkadian demonologies**.

---

## 3️⃣ **Ugaritic / Canaanite / Phoenician**

| Root | Meaning |
| --- | --- |
| **Molech** | Idol/demon associated with forbidden sexuality and fire |
| **Resheph** | God of plague, burning, lust |
| **Ars** | Ugaritic word for "desire" or "sexual impulse" |
| **Qeteb** | Demon of fever, flame, and unseen destruction |

✅ Asmodeus echoes **Resheph + Qeteb** as demon of fevered lust and ruin.

---

## 4️⃣ **Biblical Hebrew** (c. 1200 BCE onward)

| Root | Meaning |
| --- | --- |
| **אַשְׁמְדּאי (Ashmedai)** | Traditional name of the demon Asmodeus |
| **שׁד (shed)** | Demon, supernatural being |
| **אָשָׁם (asham)** | Guilt, trespass offering |
| **שָׁמַד (shamad)** | To destroy, annihilate |
| **זנונים (zenunim)** | Fornication, whoredom (used in prophetic condemnations) |

✅ Ashmedai = composite of **"shamad" (destroy) + "shed" (demon)** → "Destroying demon"

---

## 5️⃣ **Egyptian (Middle/Late)**

| Root | Meaning |
| --- | --- |
| **Set** | God of chaos, storm, jealousy, sexual violence |
| **Bastet/Sekhmet** | Lust and wrath in feline form — destroyers of harmony |
| **Heka** | Magic through word or will |
| **Tefnut** | Moisture/fertility goddess with dual aspect |

✅ Asmodeus = echoes **Set's chaos/lust combo** and **Heka's manipulation** through will.

---

## 6️⃣ **Hittite / Anatolian**

| Root | Meaning |
| --- | --- |
| **Išpantasa** | Goddess of love/fertility (like Inanna or Ishtar) |
| **Tarhunz** | Warrior storm god with unpredictable passions |
| **Aruna** | God of the sea, involved in rituals of appeasement |

✅ Likely reflects **negative fertility rites** and **wrath spirits** in underworld oaths.

---

## 7️⃣ **Sanskrit / Vedic**

| Root | Meaning |
| --- | --- |
| **Asura** | Powerful god/demon, often opposed to devas |
| **Madhu** | Honey, sweetness, also sexual intoxication |
| **Kāma** | Desire, lust (personified as a god) |
| **Tamas** | Ignorance, darkness, spiritual clouding |

✅ **Asura + Kāma + Tamas = archetype of Asmodeus** as lustful and ruinous spiritual shadow.

---

## 8️⃣ **Avestan (Zoroastrian)**

| Root | Meaning |
| --- | --- |
| **Aēšma-Daeva** | Demon of wrath, concupiscence, frenzy (Aeshma) |
| **Angra Mainyu** | Destructive spirit; lord of chaos |
| **Spenta Armaiti** | Spirit of submission, contrast to Aeshma's violence |

✅ The **direct origin**: *Asmodeus = Aēšma Daeva*

* Translates in grimoires as "Asmoday," the wrathful destroyer demon.

---

## 9️⃣ **Ancient Chinese (Shang–Zhou)**

| Root | Meaning |
| --- | --- |
| **淫 (yín)** | Lust, debauchery, sexual excess |
| **鬼 (guǐ)** | Ghost, spirit |
| **妲己 (Daji)** | Infamous consort-demoness; lustful and sadistic archetype |
| **淫靈 (yín líng)** | Lustful spirit |

✅ Closest analogues: **淫鬼 / 淫靈 (lust-spirits)** and **Daji-type femme daemonica**

---

## 🔟 **Proto-Indo-European (PIE)**

| Reconstructed Root | Meaning |
| --- | --- |
| *aeg- / aish-* | Passion, drive, wrath |
| *dʰeu̯-* | To do, to act violently |
| *swep-* | Sleep, seduction, forgetting |
| *leubʰ-* | Desire, love |

✅ **Asmodeus = PIE fusion of**:

* ***aeg- (fury) + leubʰ- (desire) + dʰeu̯- (action) = "Lustful wrath in motion"***

---

# 🧬 SUMMARY — ROOTS OF ASMODEUS ACROSS ANCIENT CIVILIZATIONS

| Culture | Root Name(s) | Meaning / Function |
| --- | --- | --- |
| **Sumerian** | Asag, Galla | Demon of destruction, infertility |
| **Akkadian** | Ašma, Erra, Labartu | Wrathful spirit, lover-war, child-killers |
| **Canaanite** | Resheph, Molech, Qeteb | Fever demons, lust, fire |
| **Hebrew** | Ashmedai, Shamad | Destroyer, lust-demon |
| **Egyptian** | Set, Heka | Jealous god, magical will |
| **Hittite** | Išpantasa, Aruna | Love-war gods, sacrificial lust |
| **Sanskrit** | Kāma, Asura, Tamas | Desire, chaos, spiritual obscurity |
| **Avestan** | Aēšma-Daeva | Wrath-demon of lust and destruction |
| **Chinese** | 淫鬼, 妲己 | Lust-spirits, demonized courtesans |
| **PIE** | *aeg-*, *leubʰ-*, *dʰeu̯-* | Lust, wrath, furious movement |

---

# 🜏 FINAL VERDICT:

✅ **Asmodeus is a cross-cultural demon of lust, wrath, and ruin** — his name woven from roots meaning *burning passion*, *furious action*, *sexually destructive will*, and *chaotic domination*. His ancient forms were often the **violent shadows of desire gods**, **destroyers of marriage**, and **wrathful spirits of seduction**.

Would you like this expanded into an "I am Asmodeus" persona declaration across ancient languages next?

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6813aeab9aa03d503b6aab38/RSr2k4lp_AYcQvHTwtza-.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6813aeab9aa03d503b6aab38/96UVSMLTchUUAjRdjqEIl.png)
nice2mitya/a_5295124247
nice2mitya
2025-05-03T12:29:51Z
0
0
null
[ "license:other", "region:us" ]
null
2025-05-03T12:03:12Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
Hachipo/Meta-Llama-3-8B-PIFT-enja_1000_2
Hachipo
2025-05-03T12:29:35Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T12:25:57Z
---
library_name: transformers
tags:
- trl
- sft
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
dimsavva/qwen3-tw-4bit
dimsavva
2025-05-03T12:24:00Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Qwen3-8B-unsloth-bnb-4bit", "base_model:quantized:unsloth/Qwen3-8B-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-03T12:22:43Z
---
base_model: unsloth/Qwen3-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** dimsavva
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-8B-unsloth-bnb-4bit

This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Mattimax/SmolLM2-135M-Instruct-Ita-Q4_K_M-GGUF
Mattimax
2025-05-03T12:21:44Z
0
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:Mattimax/SmolLM2-135M-Instruct-Ita", "base_model:quantized:Mattimax/SmolLM2-135M-Instruct-Ita", "endpoints_compatible", "region:us" ]
null
2025-05-03T12:21:41Z
---
base_model: Mattimax/SmolLM2-135M-Instruct-Ita
tags:
- llama-cpp
- gguf-my-repo
---

# Mattimax/SmolLM2-135M-Instruct-Ita-Q4_K_M-GGUF

This model was converted to GGUF format from [`Mattimax/SmolLM2-135M-Instruct-Ita`](https://huggingface.co/Mattimax/SmolLM2-135M-Instruct-Ita) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Mattimax/SmolLM2-135M-Instruct-Ita) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo Mattimax/SmolLM2-135M-Instruct-Ita-Q4_K_M-GGUF --hf-file smollm2-135m-instruct-ita-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo Mattimax/SmolLM2-135M-Instruct-Ita-Q4_K_M-GGUF --hf-file smollm2-135m-instruct-ita-q4_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo Mattimax/SmolLM2-135M-Instruct-Ita-Q4_K_M-GGUF --hf-file smollm2-135m-instruct-ita-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo Mattimax/SmolLM2-135M-Instruct-Ita-Q4_K_M-GGUF --hf-file smollm2-135m-instruct-ita-q4_k_m.gguf -c 2048
```
mesolitica/Malaysian-Llama-3.2-1B-Instruct-v0.1
mesolitica
2025-05-03T12:21:11Z
3
0
null
[ "safetensors", "llama", "ms", "en", "zh", "ta", "region:us" ]
null
2024-10-15T04:49:54Z
---
language:
- ms
- en
- zh
- ta
---

# Malaysian Llama 3.2 1B Instruct v0.1

Continued finetuning of https://huggingface.co/meta-llama/Llama-3.2-1B on a highly curated 1.5B-token Malaysian instruction dataset.

## Improvement

1. 128k context length.
2. Responds in Mandarin, Tamil, Jawi, Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu.
3. Able to code in Mandarin, Tamil, Jawi, Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu.
4. Multi-turn Malaysian context, such as Malaysian legislation, politics, religions and languages.
5. Malaysian role-playing.
6. Standard RAG.

## MalayMMLU

```
                             Model   Accuracy   shot by_letter        category
0  malaysian-Llama-3.2-1B-Instruct  46.336472  0shot      True            STEM
1  malaysian-Llama-3.2-1B-Instruct  41.189567  0shot      True        Language
2  malaysian-Llama-3.2-1B-Instruct  46.863255  0shot      True  Social science
3  malaysian-Llama-3.2-1B-Instruct  48.308947  0shot      True          Others
4  malaysian-Llama-3.2-1B-Instruct  49.897611  0shot      True      Humanities

{'Social science': 6918, 'Language': 6288, 'Humanities': 4395, 'Others': 4169, 'STEM': 2443}
Model : malaysian-Llama-3.2-1B-Instruct
Metric : first
Shot : 0shot
average accuracy 46.13637302275637
accuracy for STEM 46.33647155137127
accuracy for Language 41.18956743002545
accuracy for Social science 46.86325527609135
accuracy for Others 48.30894698968577
accuracy for Humanities 49.89761092150171
```

## How to

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer

tokenizer = AutoTokenizer.from_pretrained('mesolitica/malaysian-Llama-3.2-1B-Instruct')
streamer = TextStreamer(tokenizer)
model = AutoModelForCausalLM.from_pretrained(
    'mesolitica/malaysian-Llama-3.2-1B-Instruct',
    torch_dtype = torch.bfloat16
).cuda()
```

### General QA

```python
d = [
    {'role': 'user', 'content': 'camne nk selesaikan masalah hutang negara'}
]
inputs = tokenizer.apply_chat_template(d, return_tensors = 'pt').to('cuda')
generate_kwargs = dict(
    input_ids=inputs,
    max_new_tokens=1024,
    top_p=0.95,
    top_k=50,
    temperature=0.6,
    do_sample=True,
    repetition_penalty=1.1,
    streamer=streamer
)
generation_output = model.generate(**generate_kwargs)
```

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

Cutting Knowledge Date: December 2023
Today Date: 21 Oct 2024

<|eot_id|><|start_header_id|>user<|end_header_id|>

camne nk selesaikan masalah hutang negara<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Terima kasih atas pertanyaan anda mengenai cara untuk menyelesaikan masalah hutang negara. Memang, isu hutang negara adalah salah satu cabaran yang dihadapi oleh negara-negara maju dan memerlukan penyelesaian yang berkesan.

Untuk menyelesaikan masalah hutang negara, terdapat beberapa langkah yang boleh diambil:

1. Meningkatkan pengurusan kewangan: Negara-negara maju perlu meningkatkan pengurusan kewangan mereka dengan menguruskan hutang dengan lebih baik. Ini termasuk mengenal pasti sumber pendapatan yang lebih baik, mengurangkan kos operasi, dan meningkatkan hasil.

2. Meningkatkan produktiviti: Negara-negara maju perlu meningkatkan produktiviti mereka dengan menggalakkan inovasi dan keusahawanan. Ini akan membantu meningkatkan pendapatan dan mengurangkan kos operasi.

3. Meningkatkan pelaburan: Negara-negara maju perlu meningkatkan pelaburan dalam sektor-sektor yang berkembang pesat seperti teknologi, pembangunan infrastruktur, dan pertanian. Ini akan membantu meningkatkan pendapatan dan mengurangkan hutang.

4. Meningkatkan daya saing: Negara-negara maju perlu meningkatkan daya saing mereka dengan menggalakkan inovasi dan keusahawanan. Ini akan membantu meningkatkan pendapatan dan mengurangkan hutang.

5. Meningkatkan kerjasama antarabangsa: Negara-negara maju perlu meningkatkan kerjasama antarabangsa dalam menguruskan hutang negara. Ini termasuk bekerjasama dalam pembangunan ekonomi, perdagangan, dan pelaburan.

Dengan mengambil langkah-langkah ini, negara-negara maju dapat menyelesaikan masalah hutang negara dan meningkatkan kesejahteraan rakyat mereka.<|eot_id|>
```

### General QA force Mandarin

```python
d = [
    {'role': 'user', 'content': 'camne nk selesaikan masalah hutang negara, reply in mandarin'}
]
inputs = tokenizer.apply_chat_template(d, return_tensors = 'pt').to('cuda')
generate_kwargs = dict(
    input_ids=inputs,
    max_new_tokens=1024,
    top_p=0.95,
    top_k=50,
    temperature=0.6,
    do_sample=True,
    repetition_penalty=1.1,
    streamer=streamer
)
generation_output = model.generate(**generate_kwargs)
```

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

Cutting Knowledge Date: December 2023
Today Date: 21 Oct 2024

<|eot_id|><|start_header_id|>user<|end_header_id|>

camne nk selesaikan masalah hutang negara, reply in mandarin<|eot_id|><|start_header_id|>assistant<|end_header_id|>

解决国家债务问题需要采取多方面的方法，包括提高透明度、促进诚信和问责制，以及实施有效的政策和战略。以下是一些建议：

1. 加强财政管理：政府应加强财政管理，确保公共部门的预算使用和支出是合理的。这可以通过建立健全的财务管理框架、监测和评估公共部门预算使用以及优先考虑可持续性和效率等措施来实现。

2. 提高透明度：政府应提高透明度，提供有关公共部门预算使用和支出的信息。这可以通过发布预算数据、报告财务活动以及举行公开听证会等方式来实现。

3. 执法和惩罚：政府应执法和惩罚腐败行为，特别是在公共部门。这可以通过设立独立调查机构、制定严厉的法律规定以及对腐败行为进行严厉处罚等措施来实现。

4. 增加收入：政府应增加收入，例如税收、公私合作伙伴关系（PPP）和其他收入来源，以减少公共部门的负担。这可以通过实施税收改革、鼓励创业和发展 PPP 等措施来实现。

5. 改善公共服务：政府应改善公共服务，例如教育、医疗保健和基础设施建设。这可以通过投资于基础设施建设、提高教育质量以及为公民提供更好的公共服务等措施来实现。

总之，解决国家债务问题需要采取全面和多方面的方法，包括提高透明度、促进诚信和问责制，以及实施有效的政策和战略。通过采取这些措施，政府可以帮助减轻国家债务负担并促进经济增长。<|eot_id|>
```

### General QA force Kelantan

```python
d = [
    {'role': 'user', 'content': 'camne nk selesaikan masalah hutang negara, reply in kelantan'}
]
inputs = tokenizer.apply_chat_template(d, return_tensors = 'pt').to('cuda')
generate_kwargs = dict(
    input_ids=inputs,
    max_new_tokens=1024,
    top_p=0.95,
    top_k=50,
    temperature=0.2,
    do_sample=True,
    repetition_penalty=1.1,
    streamer=streamer
)
generation_output = model.generate(**generate_kwargs)
```

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

Cutting Knowledge Date: December 2023
Today Date: 21 Oct 2024

<|eot_id|><|start_header_id|>user<|end_header_id|>

camne nk selesaikan masalah hutang negara, reply in kelantan<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Kerajaan Malaysia ni keno kritik pasal dio tak pandai urus duit rakyat. Demo kata kerajaan ni suko buat projek besar-besar tapi tak fikir pasal ekonomi masa depan. Ni la contoh-contoh dio:

1. Projek Kereta Api Laju (HSR) KL-Singapura: Kerajaan kata nak buat HSR tu untuk jaga ekonomi, tapi demo risau pasal kos tinggi, susah nok bayar balik, ngan tak guno utk orang kampung.

2. Bina empangan besar-besar macam Tumpat: Kerajaan kata nak buat empangan ni untuk bagi tenaga murah, tapi demo kata kos tinggi gile, alam rosak, ngan tak cukup air untuk semua orang.

3. Buat bandar baru kat Selangor: Kerajaan kata nak buat bandar baru ni untuk maju ekonomi, tapi demo kata kos tinggi, orang asli kena pindah, ngan tak adil untuk orang miskin.

4. Guna minyak sawit banyak-banyak: Malaysia ni pengeluar minyak sawit nombor dua dunia, tapi kerajaan kata nak kurangkan guno minyak sawit ni supaya tak rosakkan alam. Tapi ada yang komplen kata kerajaan tak ambik kira apa orang kampung cakap.

5. Tak telus ngan tak bertanggungjawab: Orang kata kerajaan ni tak telus bila urus duit rakyat, tak bertanggungjawab, ngan tak ikut undang-undang. Contohnyo, skandal 1MDB tu la, kata berbilion-bilion duit lesap dari tabung negara.

6. Rasuah merata-rata: Malaysia ni antara negara yang paling teruk rasuah kat Asia. Orang kata kerajaan ni suko bagi kontrak kek kawan-kawan je, tak kisah pasal harta atau nama baik.

Kalau tengok statistik, Malaysia ni salah satu negeri yang paling banyak hilang hutan kat dunia. Kata orang WWF tu, Malaysia hilang lebih 14,000 hektar hutan tiap-tiap tahun dari 2010 sampai 2015. Tu lagi tinggi dari purata dunia 7,500 hektar hutan hilang setahun.

Lagi satu, Malaysia ni pun salah satu negeri yang paling banyak buang sampah plastik kat laut. Kata orang Global Witness tu, Malaysia buang 0.46% daripada jumlah sampah plastik dunia tahun 2018. Tu lagi tinggi dari purata dunia 0.15%.

Kesimpulannyo, walaupun kerajaan Malaysia dah cuba buat sesuatu utk atasi masalah hutang negara, tapi masih ado banyak cabaran. Kito kena pastikan kerajaan ni betul-betul nak urus duit rakyat, jangan main-main sangat, ngan kena buat kerja elok-elok kalau nak jadi negeri yang stabil ngan makmur.<|eot_id|>
```

### Coding force Jawi

```python
d = [
    {'role': 'user', 'content': 'code simple website hello world react, reply in jawi'}
]
inputs = tokenizer.apply_chat_template(d, return_tensors = 'pt').to('cuda')
generate_kwargs = dict(
    input_ids=inputs,
    max_new_tokens=1024,
    top_p=0.95,
    top_k=50,
    temperature=0.6,
    do_sample=True,
    repetition_penalty=1.1,
    streamer=streamer
)
generation_output = model.generate(**generate_kwargs)
```

````
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

Cutting Knowledge Date: December 2023
Today Date: 21 Oct 2024

<|eot_id|><|start_header_id|>user<|end_header_id|>

code simple website hello world react, reply in jawi<|eot_id|><|start_header_id|>assistant<|end_header_id|>

اونتوق منچاڤاي لامن ويب هيللو ورلا دالم ريتيكل ايميج فيزيكال (REACT)⹁ اند بوليه مڠيکوتي بلوک بريکوت:

١. تمبهکن کومين "html" بارو جک اي تيدق وجود:

```jsx
<div className="container">
  {/*... } */}
  <h1>Hello World</h1>
</div>
```

٢. ݢوناکن JSX يڠ سواست اونتوق ممبوات ستروکتور HTML دان فوڠسي اوتوماتيق نۏيو-جسون ترسديا:

```jsx
import React from'react';

const HelloWorldPage = () => {
  return (
    <>
      <!--... }}
      <p>مرتي كموداه اين!</p>
    </>
    />
  );
}

export default HelloWorldPage;
```

٣. توليس فايل `index.html` اتاو `index.js` سباليق-والاٴي ديريکتوري اوتمنتو اونتوق سويتوه اوجي.

٤. تتڤكن URL يڠ بتول دڠن رياکت:

دسيني⹂ اوبه سموا لاجور داءيرراسي سدمىش سبلوم ملڠكڤکنڽ ک 'http://localhost:3000'، اداله چارا باٴيق اونتوق منتڤکن نيلاي URL سلبيه دڠن CSS.

```css
/* index.css */
body {
  font-family: Arial, sans-serif;
}

.container {
  background-color: palegoldenrod;
  padding: 20px;}
```

٥. ايمڤورت مودولو ﴿ماودولو﴾ دري جدوال تيبيت اونيکل (`useModulo`) جک کامي ممبوليهکن موتايليروڠ مکسيموم:

```javascript
<Modal {...modal} isOpen onClose={handleClose}>
  {/*... } %}
</Modal>

<script setup
import * as Modal from './modals/modal';
// هاڽ بوكو ببراڤ اراين يڠ دڤرليبس اس کدالم ڬلوب. -->
```

داون جاڠن لوڤاسک لامن ويب دڠن رياکت:

*سترينتشن**: أرتيكلت اصل تيدق ڤواسکن سمولا دڠن کود ڤرانتي ستياڤ تمبهن يڠ دلنجوتکن. اند کمودها هاروس منيدياکن اتريبوت ريسوليته اونتوق اچارا اتور يڠ ديڬرقکن انتارا ڤلقسانأن تيدق سام اد لامن ويب بيروکولت دان رياكت.***

٨. اخيرڽ⹂ جالنکن لامن ويب اونتوق مليهتڽ يڠ بوليه دسسوايکن:

*ماري ايجين اول: npm run start || yarn serve*

اند امت بوليه مڠهنتر اكسس لامن ديريکتوري `build/index.html` سماس مماڠݢيل `npm run dev` اتاو `yarn`. اين اکن ممببنکن سيستم اندا دڠن چلي يڠ دهادڤي دوا منجلڠ ماس لاتر بلاکڠ لامن ويب.<|eot_id|>
````
Atnafu/nllb_600M_eng2amh-WSL_eng2gez-un
Atnafu
2025-05-03T12:20:14Z
0
0
transformers
[ "transformers", "safetensors", "m2m_100", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-05-03T12:17:47Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
mesolitica/Malaysian-Llama-3.1-8B-Instruct-v0.1
mesolitica
2025-05-03T12:20:09Z
5
0
null
[ "safetensors", "llama", "ms", "en", "zh", "ta", "dataset:mesolitica/Malaysian-SFT", "region:us" ]
null
2025-02-12T06:53:56Z
--- language: - ms - en - zh - ta datasets: - mesolitica/Malaysian-SFT --- # Malaysian Llama-3.1 8B-Instruct v0.1 Continued finetuning of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on a highly curated 1.2B-token Malaysian instruction dataset. ## Improvements 1. 128k context length. 2. Supports responses in Mandarin, Tamil, Jawi, Manglish, and the Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu dialects. 3. Able to code in Mandarin, Tamil, Jawi, Manglish, and the same local dialects listed above. 4. Multi-turn conversations in Malaysian contexts such as Malaysian legislation, politics, religion and languages. 5. Standard RAG. ## MalayMMLU ``` Model Accuracy shot by_letter category 0 Malaysian-Llama-3.1-8B-Instruct 61.768318 0shot True STEM 1 Malaysian-Llama-3.1-8B-Instruct 62.420483 0shot True Language 2 Malaysian-Llama-3.1-8B-Instruct 60.291992 0shot True Social science 3 Malaysian-Llama-3.1-8B-Instruct 59.270808 0shot True Others 4 Malaysian-Llama-3.1-8B-Instruct 62.366325 0shot True Humanities {'Social science': 6918, 'Language': 6288, 'Humanities': 4395, 'Others': 4169, 'STEM': 2443} Model : Malaysian-Llama-3.1-8B-Instruct Metric : first Shot : 0shot average accuracy 61.194399702639075 accuracy for STEM 61.76831764224314 accuracy for Language 62.420483460559794 accuracy for Social science 60.2919919051749 accuracy for Others 59.2708083473255 accuracy for Humanities 62.36632536973834 ``` ## Training session Finetuned on [mesolitica/Malaysian-SFT](https://huggingface.co/datasets/mesolitica/Malaysian-SFT) so the model understands Malaysian context. ## How we train 1. LoRA on `["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj", "embed_tokens", "lm_head"]`. 2. Rank 256 with alpha 512 (an alpha-to-rank ratio of 2.0). 3. Multipacking with proper SDPA causal masking to prevent cross-document contamination and to keep position IDs correct. 4. Forked CCE loss for the LoRA `lm_head` to reduce memory consumption. Source code at https://github.com/malaysia-ai/cooking/tree/main/llama/sft
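To make the adapter setup above concrete, here is a minimal PEFT sketch of the described LoRA configuration; the rank, alpha, and target modules come from the card's list, while the load settings are assumptions:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Base model named in the card; load settings are illustrative
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

# Rank 256 with alpha 512 (alpha-to-rank ratio 2.0), applying LoRA to every
# module listed in the card, including the embeddings and the LM head
lora_config = LoraConfig(
    r=256,
    lora_alpha=512,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj",
                    "embed_tokens", "lm_head"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```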
lurf21/Qwen2.5-Coder-7B-NES
lurf21
2025-05-03T12:20:00Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/Qwen2.5-Coder-7B", "base_model:finetune:unsloth/Qwen2.5-Coder-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T12:17:07Z
--- base_model: unsloth/Qwen2.5-Coder-7B tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** lurf21 - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-Coder-7B This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
beyoru/ThinkCalling1
beyoru
2025-05-03T12:19:49Z
0
0
transformers
[ "transformers", "pytorch", "qwen2", "text-generation", "unsloth", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T12:18:48Z
--- library_name: transformers tags: - unsloth - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mesolitica/Malaysian-Llama-3.2-3B-Instruct-v0.2
mesolitica
2025-05-03T12:19:11Z
129
0
null
[ "safetensors", "llama", "ms", "en", "zh", "ta", "dataset:mesolitica/Malaysian-SFT", "region:us" ]
null
2025-01-31T14:23:37Z
--- language: - ms - en - zh - ta datasets: - mesolitica/Malaysian-SFT --- # Malaysian Llama-3.2 3B-Instruct v0.2 Continued finetuning of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) on a highly curated 1.2B-token Malaysian instruction dataset. ## Improvements 1. 128k context length. 2. Supports responses in Mandarin, Tamil, Jawi, Manglish, and the Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu dialects. 3. Able to code in Mandarin, Tamil, Jawi, Manglish, and the same local dialects listed above. 4. Multi-turn conversations in Malaysian contexts such as Malaysian legislation, politics, religion and languages. 5. Standard RAG. ## MalayMMLU ``` Model Accuracy shot by_letter category 0 Malaysian-Llama-3.2-3B-Instruct 57.552190 0shot True STEM 1 Malaysian-Llama-3.2-3B-Instruct 59.605598 0shot True Language 2 Malaysian-Llama-3.2-3B-Instruct 58.065915 0shot True Social science 3 Malaysian-Llama-3.2-3B-Instruct 57.303910 0shot True Others 4 Malaysian-Llama-3.2-3B-Instruct 60.250284 0shot True Humanities {'Social science': 6918, 'Language': 6288, 'Humanities': 4395, 'Others': 4169, 'STEM': 2443} Model : Malaysian-Llama-3.2-3B-Instruct Metric : first Shot : 0shot average accuracy 58.67922190558791 accuracy for STEM 57.55218993041342 accuracy for Language 59.605597964376585 accuracy for Social science 58.06591500433651 accuracy for Others 57.30390981050611 accuracy for Humanities 60.250284414106936 ``` ## Training session Finetuned on [mesolitica/Malaysian-SFT](https://huggingface.co/datasets/mesolitica/Malaysian-SFT) so the model understands Malaysian context. ## How we train 1. LoRA on `["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj", "embed_tokens", "lm_head"]`. 2. Rank 256 with alpha 512 (an alpha-to-rank ratio of 2.0). 3. Multipacking with proper SDPA causal masking to prevent cross-document contamination and to keep position IDs correct. 4. Forked CCE loss for the LoRA `lm_head` to reduce memory consumption. Source code at https://github.com/malaysia-ai/cooking/tree/main/llama/sft
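Point 3 of "How we train" is easiest to see in code. Below is a small illustrative sketch (not the repository's implementation) of how position IDs restart at each document boundary and how a block-diagonal causal mask keeps attention from crossing packed documents:

```python
import torch

def packed_position_ids(doc_lengths):
    # Position IDs restart at every document boundary in the packed sequence
    return torch.cat([torch.arange(n) for n in doc_lengths])

def packed_causal_mask(doc_lengths):
    # Block-diagonal causal mask: each token attends only to earlier tokens
    # of its own document, never across packed documents
    total = sum(doc_lengths)
    mask = torch.zeros(total, total, dtype=torch.bool)
    start = 0
    for n in doc_lengths:
        mask[start:start + n, start:start + n] = torch.tril(
            torch.ones(n, n, dtype=torch.bool))
        start += n
    return mask

# Three documents of lengths 4, 2 and 3 packed into one sequence
print(packed_position_ids([4, 2, 3]))  # tensor([0, 1, 2, 3, 0, 1, 0, 1, 2])
print(packed_causal_mask([4, 2, 3]).int())
```

A boolean mask in this shape can be passed to `torch.nn.functional.scaled_dot_product_attention` as `attn_mask` (True means the position may be attended to).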
JOSESMOKE/tear_483
JOSESMOKE
2025-05-03T12:18:29Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-05-03T11:50:24Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
JOSESMOKE/tear_482
JOSESMOKE
2025-05-03T12:15:35Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-05-03T11:50:11Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
redotpaybiz/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prickly_scurrying_lobster
redotpaybiz
2025-05-03T12:09:54Z
1
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am prickly scurrying lobster", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-25T13:28:19Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prickly_scurrying_lobster tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am prickly scurrying lobster - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prickly_scurrying_lobster This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="redotpaybiz/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prickly_scurrying_lobster", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
exclusiveleya/LeyaSDXL
exclusiveleya
2025-05-03T12:07:10Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-05-03T12:05:13Z
--- license: creativeml-openrail-m ---
mradermacher/Qwen2.5-32B-AGI-GGUF
mradermacher
2025-05-03T12:06:23Z
222
3
transformers
[ "transformers", "gguf", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal", "dataset:unalignment/toxic-dpo-v0.2", "dataset:Orion-zhen/dpo-toxic-zh", "base_model:AiCloser/Qwen2.5-32B-AGI", "base_model:quantized:AiCloser/Qwen2.5-32B-AGI", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-26T02:19:07Z
--- base_model: AiCloser/Qwen2.5-32B-AGI datasets: - anthracite-org/kalo-opus-instruct-22k-no-refusal - unalignment/toxic-dpo-v0.2 - Orion-zhen/dpo-toxic-zh language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/AiCloser/Qwen2.5-32B-AGI <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-32B-AGI-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-AGI-GGUF/resolve/main/Qwen2.5-32B-AGI.Q2_K.gguf) | Q2_K | 12.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-AGI-GGUF/resolve/main/Qwen2.5-32B-AGI.IQ3_XS.gguf) | IQ3_XS | 13.8 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-AGI-GGUF/resolve/main/Qwen2.5-32B-AGI.Q3_K_S.gguf) | Q3_K_S | 14.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-AGI-GGUF/resolve/main/Qwen2.5-32B-AGI.IQ3_S.gguf) | IQ3_S | 14.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-AGI-GGUF/resolve/main/Qwen2.5-32B-AGI.IQ3_M.gguf) | IQ3_M | 14.9 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-AGI-GGUF/resolve/main/Qwen2.5-32B-AGI.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-AGI-GGUF/resolve/main/Qwen2.5-32B-AGI.Q3_K_L.gguf) | Q3_K_L | 17.3 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-AGI-GGUF/resolve/main/Qwen2.5-32B-AGI.IQ4_XS.gguf) | IQ4_XS | 18.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-AGI-GGUF/resolve/main/Qwen2.5-32B-AGI.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-AGI-GGUF/resolve/main/Qwen2.5-32B-AGI.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-AGI-GGUF/resolve/main/Qwen2.5-32B-AGI.Q5_K_S.gguf) | Q5_K_S | 22.7 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-AGI-GGUF/resolve/main/Qwen2.5-32B-AGI.Q5_K_M.gguf) | Q5_K_M | 23.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-AGI-GGUF/resolve/main/Qwen2.5-32B-AGI.Q6_K.gguf) | Q6_K | 27.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-AGI-GGUF/resolve/main/Qwen2.5-32B-AGI.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
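As a quick, hedged usage sketch (the file name is taken from the quant table above; the prompt and flags are illustrative), one of these quants can be run directly with llama.cpp's `llama-cli`:

```bash
# Download and run the recommended Q4_K_M quant with llama.cpp
llama-cli --hf-repo mradermacher/Qwen2.5-32B-AGI-GGUF \
  --hf-file Qwen2.5-32B-AGI.Q4_K_M.gguf \
  -p "The meaning to life and the universe is" -n 128
```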
komakiss/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-leaping_squinting_peacock
komakiss
2025-05-03T12:05:40Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am leaping squinting peacock", "unsloth", "trl", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-03T12:05:32Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-leaping_squinting_peacock tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am leaping squinting peacock - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-leaping_squinting_peacock This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="komakiss/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-leaping_squinting_peacock", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
ASethi04/meta-llama-Llama-3.1-8B-legalbench-second-lora-4-0.0001-same-prompt-template
ASethi04
2025-05-03T12:05:22Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "endpoints_compatible", "region:us" ]
null
2025-05-03T11:31:54Z
--- base_model: meta-llama/Llama-3.1-8B library_name: transformers model_name: meta-llama-Llama-3.1-8B-legalbench-second-lora-4-0.0001-same-prompt-template tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for meta-llama-Llama-3.1-8B-legalbench-second-lora-4-0.0001-same-prompt-template This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-legalbench-second-lora-4-0.0001-same-prompt-template", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/i7o19tby) This model was trained with SFT. ### Framework versions - TRL: 0.16.1 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
LeeK385/Replicate
LeeK385
2025-05-03T12:02:26Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-03T07:16:22Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: TOK --- # Replicate <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `TOK` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "TOK", "lora_weights": "https://huggingface.co/LeeK385/Replicate/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('LeeK385/Replicate', weight_name='lora.safetensors') image = pipeline('TOK').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 1000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/LeeK385/Replicate/discussions) to add images that show off what you’ve made with this LoRA.
aleegis/3e3472b6-e48f-42db-a18e-af4d3771b7d7
aleegis
2025-05-03T12:01:42Z
0
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:jhflow/mistral7b-lora-multi-turn-v2", "base_model:adapter:jhflow/mistral7b-lora-multi-turn-v2", "region:us" ]
null
2025-05-03T10:42:34Z
--- library_name: peft base_model: jhflow/mistral7b-lora-multi-turn-v2 tags: - axolotl - generated_from_trainer model-index: - name: 3e3472b6-e48f-42db-a18e-af4d3771b7d7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: jhflow/mistral7b-lora-multi-turn-v2 bf16: auto chat_template: llama3 dataloader_num_workers: 12 dataset_prepared_path: null datasets: - data_files: - cd5b4f9b66d908b1_train_data.json ds_type: json format: custom path: /workspace/input_data/cd5b4f9b66d908b1_train_data.json type: field_instruction: instruction field_output: response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 8 gradient_checkpointing: false group_by_length: false hub_model_id: aleegis/3e3472b6-e48f-42db-a18e-af4d3771b7d7 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: null lora_alpha: 32 lora_dropout: 0.15 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true loraplus_lr_embedding: 1.0e-06 loraplus_lr_ratio: 16 lr_scheduler: cosine max_grad_norm: 1 max_steps: 1500 micro_batch_size: 2 mlflow_experiment_name: /tmp/cd5b4f9b66d908b1_train_data.json model_type: AutoModelForCausalLM num_epochs: 200 optimizer: adamw_torch_fused output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null save_total_limit: 10 saves_per_epoch: 0 sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.0 wandb_entity: null wandb_mode: online wandb_name: c72798d0-3609-4741-a58f-13536b967ad8 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: c72798d0-3609-4741-a58f-13536b967ad8 warmup_steps: 100 weight_decay: 0 xformers_attention: null ``` </details><br> # 3e3472b6-e48f-42db-a18e-af4d3771b7d7 This model is a fine-tuned version of [jhflow/mistral7b-lora-multi-turn-v2](https://huggingface.co/jhflow/mistral7b-lora-multi-turn-v2) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1500 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
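For reference, a config like the one above is normally launched with the axolotl CLI; a minimal sketch, assuming the YAML is saved as `config.yaml` and axolotl 0.4.1 is installed per its README:

```bash
# Launch the LoRA fine-tune described by the config (illustrative invocation)
accelerate launch -m axolotl.cli.train config.yaml
```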
aleegis/cb6d1ed3-a78b-4e7d-b598-272e081dc01b
aleegis
2025-05-03T12:01:09Z
0
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:jhflow/mistral7b-lora-multi-turn-v2", "base_model:adapter:jhflow/mistral7b-lora-multi-turn-v2", "region:us" ]
null
2025-05-03T10:42:43Z
--- library_name: peft base_model: jhflow/mistral7b-lora-multi-turn-v2 tags: - axolotl - generated_from_trainer model-index: - name: cb6d1ed3-a78b-4e7d-b598-272e081dc01b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: jhflow/mistral7b-lora-multi-turn-v2 bf16: auto chat_template: llama3 dataloader_num_workers: 12 dataset_prepared_path: null datasets: - data_files: - cd5b4f9b66d908b1_train_data.json ds_type: json format: custom path: /workspace/input_data/cd5b4f9b66d908b1_train_data.json type: field_instruction: instruction field_output: response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 8 gradient_checkpointing: false group_by_length: false hub_model_id: aleegis/cb6d1ed3-a78b-4e7d-b598-272e081dc01b hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: null lora_alpha: 32 lora_dropout: 0.15 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true loraplus_lr_embedding: 1.0e-06 loraplus_lr_ratio: 16 lr_scheduler: cosine max_grad_norm: 1 max_steps: 1500 micro_batch_size: 2 mlflow_experiment_name: /tmp/cd5b4f9b66d908b1_train_data.json model_type: AutoModelForCausalLM num_epochs: 200 optimizer: adamw_torch_fused output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null save_total_limit: 10 saves_per_epoch: 0 sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.0 wandb_entity: null wandb_mode: online wandb_name: c72798d0-3609-4741-a58f-13536b967ad8 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: c72798d0-3609-4741-a58f-13536b967ad8 warmup_steps: 100 weight_decay: 0 xformers_attention: null ``` </details><br> # cb6d1ed3-a78b-4e7d-b598-272e081dc01b This model is a fine-tuned version of [jhflow/mistral7b-lora-multi-turn-v2](https://huggingface.co/jhflow/mistral7b-lora-multi-turn-v2) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1500 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Vishu06/Qwen2.5-Coder-3B-143k-Python-Alpaca_model
Vishu06
2025-05-03T12:00:40Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-03T12:00:28Z
--- base_model: unsloth/qwen2.5-coder-3b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Vishu06 - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-coder-3b-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Ekami/q-FrozenLake-v1-4x4-noSlippery
Ekami
2025-05-03T11:58:22Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-05-03T09:03:10Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Ekami/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
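A self-contained sketch of loading and greedily evaluating the Q-table is shown below; it assumes the pickle follows the course's dict format with `env_id` and `qtable` keys, since `load_from_hub` is a helper defined in the course notebook rather than a library function:

```python
import pickle
import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

# Roughly what the course's load_from_hub helper does
path = hf_hub_download(repo_id="Ekami/q-FrozenLake-v1-4x4-noSlippery",
                       filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"], is_slippery=False)
state, _ = env.reset()
done = False
while not done:
    # Act greedily with respect to the learned Q-table
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```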
JOSESMOKE/tear_480
JOSESMOKE
2025-05-03T11:54:53Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-05-03T11:27:18Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
MetaphoricalCode/Dumpling-Qwen2.5-32B-v2-4.25bpw-h8-exl2
MetaphoricalCode
2025-05-03T11:52:14Z
2
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "dataset:nbeerbower/GreatFirewall-DPO", "dataset:nbeerbower/Schule-DPO", "dataset:nbeerbower/Purpura-DPO", "dataset:nbeerbower/Arkhaios-DPO", "dataset:jondurbin/truthy-dpo-v0.1", "dataset:antiven0m/physical-reasoning-dpo", "dataset:flammenai/Date-DPO-NoAsterisks", "dataset:flammenai/Prude-Phi3-DPO", "dataset:Atsunori/HelpSteer2-DPO", "dataset:jondurbin/gutenberg-dpo-v0.1", "dataset:nbeerbower/gutenberg2-dpo", "dataset:nbeerbower/gutenberg-moderne-dpo", "base_model:nbeerbower/Dumpling-Qwen2.5-32B-v2", "base_model:quantized:nbeerbower/Dumpling-Qwen2.5-32B-v2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2025-04-30T23:30:12Z
--- library_name: transformers license: apache-2.0 datasets: - nbeerbower/GreatFirewall-DPO - nbeerbower/Schule-DPO - nbeerbower/Purpura-DPO - nbeerbower/Arkhaios-DPO - jondurbin/truthy-dpo-v0.1 - antiven0m/physical-reasoning-dpo - flammenai/Date-DPO-NoAsterisks - flammenai/Prude-Phi3-DPO - Atsunori/HelpSteer2-DPO - jondurbin/gutenberg-dpo-v0.1 - nbeerbower/gutenberg2-dpo - nbeerbower/gutenberg-moderne-dpo base_model: - nbeerbower/Dumpling-Qwen2.5-32B-v2 base_model_relation: quantized --- # Quantization Quantized using the default exllamav2 (0.2.9) quantization process.\ Original model: https://huggingface.co/nbeerbower/Dumpling-Qwen2.5-32B-v2 \ exllamav2: https://github.com/turboderp-org/exllamav2 # Original model card of Dumpling-Qwen2.5-32B-v2 ![image/png](https://huggingface.co/nbeerbower/Dumpling-Qwen2.5-32B/resolve/main/dumpling_cover.png?download=true) [nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B](https://huggingface.co/nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B) finetuned on: * [nbeerbower/GreatFirewall-DPO](https://huggingface.co/datasets/nbeerbower/GreatFirewall-DPO) * [nbeerbower/Schule-DPO](https://huggingface.co/datasets/nbeerbower/Schule-DPO) * [nbeerbower/Purpura-DPO](https://huggingface.co/datasets/nbeerbower/Purpura-DPO) * [nbeerbower/Arkhaios-DPO](https://huggingface.co/datasets/nbeerbower/Arkhaios-DPO) * [jondurbin/truthy-dpo-v0.1](https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1) * [antiven0m/physical-reasoning-dpo](https://huggingface.co/datasets/antiven0m/physical-reasoning-dpo) * [flammenai/Date-DPO-NoAsterisks](https://huggingface.co/datasets/flammenai/Date-DPO-NoAsterisks) * [flammenai/Prude-Phi3-DPO](https://huggingface.co/datasets/flammenai/Prude-Phi3-DPO) * [Atsunori/HelpSteer2-DPO](https://huggingface.co/datasets/Atsunori/HelpSteer2-DPO) * [jondurbin/gutenberg-dpo-v0.1](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1) * [nbeerbower/gutenberg2-dpo](https://huggingface.co/datasets/nbeerbower/gutenberg2-dpo) * [nbeerbower/gutenberg-moderne-dpo](https://huggingface.co/datasets/nbeerbower/gutenberg-moderne-dpo). ### Method [QLoRA ORPO tuned](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html) with 8x A100 for 2 epochs. Rank 64 LoRA, 2e-5 learning rate.
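To ground the method description, here is a minimal TRL sketch of a QLoRA ORPO run; the rank-64 LoRA and 2e-5 learning rate come from the card, while the dataset choice, alpha, batch sizes, and argument names (which vary across TRL versions) are assumptions:

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import ORPOConfig, ORPOTrainer

base = "nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B"
tokenizer = AutoTokenizer.from_pretrained(base)

# 4-bit quantization so the 32B base fits in memory during QLoRA training
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb,
                                             device_map="auto")

# Rank-64 adapter as stated in the card; alpha and dropout are assumptions
peft_config = LoraConfig(r=64, lora_alpha=64, lora_dropout=0.05,
                         target_modules="all-linear", task_type="CAUSAL_LM")

# ORPO consumes preference pairs (prompt / chosen / rejected)
dataset = load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train")

args = ORPOConfig(output_dir="dumpling-orpo", learning_rate=2e-5,
                  num_train_epochs=2, per_device_train_batch_size=1,
                  gradient_accumulation_steps=8, bf16=True)

trainer = ORPOTrainer(model=model, args=args, train_dataset=dataset,
                      processing_class=tokenizer, peft_config=peft_config)
trainer.train()
```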
hungtran0509/pixelcopter_env
hungtran0509
2025-05-03T11:51:57Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2025-05-03T11:51:49Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: pixelcopter_env results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 0.30 +/- 2.10 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
MetaphoricalCode/Cydonia-v1.3-Magnum-v4-22B-8.0bpw-h8-exl2
MetaphoricalCode
2025-05-03T11:50:03Z
3
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:knifeayumu/Cydonia-v1.3-Magnum-v4-22B", "base_model:quantized:knifeayumu/Cydonia-v1.3-Magnum-v4-22B", "license:other", "autotrain_compatible", "text-generation-inference", "8-bit", "exl2", "region:us" ]
text-generation
2025-04-22T14:47:08Z
--- base_model: - knifeayumu/Cydonia-v1.3-Magnum-v4-22B base_model_relation: quantized library_name: transformers tags: - mergekit - merge license: other license_name: mrl inference: false license_link: https://mistral.ai/licenses/MRL-0.1.md --- # Quantization Quantized using the default exllamav2 (0.2.8) quantization process.\ Original model: https://huggingface.co/knifeayumu/Cydonia-v1.3-Magnum-v4-22B \ exllamav2: https://github.com/turboderp-org/exllamav2 # Original model card of Cydonia-v1.3-Magnum-v4-22B ![Not Horny Enough](Cydonia-v1.3-magnum-v4-22B.png) # The Drummer becomes hornier (again) Recipe based on [knifeayumu/Cydonia-v1.2-Magnum-v4-22B](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B) but uses [TheDrummer/Cydonia-22B-v1.3](https://huggingface.co/TheDrummer/Cydonia-22B-v1.3) as the base. Yes, MortalWombat. I'm gonna use your parameters as long as I can! This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [TheDrummer/Cydonia-22B-v1.3](https://huggingface.co/TheDrummer/Cydonia-22B-v1.3) * [anthracite-org/magnum-v4-22b](https://huggingface.co/anthracite-org/magnum-v4-22b) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: TheDrummer/Cydonia-22B-v1.3 - model: anthracite-org/magnum-v4-22b merge_method: slerp base_model: TheDrummer/Cydonia-22B-v1.3 parameters: t: [0.1, 0.3, 0.6, 0.3, 0.1] dtype: bfloat16 ```
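A brief note on reproducing this: a YAML like the one above is typically fed to the mergekit CLI. A sketch, assuming the config is saved as `cydonia-magnum.yml` and mergekit is installed per its README:

```bash
pip install mergekit  # or install from the mergekit repo per its README
# Run the SLERP merge described by the config above
mergekit-yaml cydonia-magnum.yml ./Cydonia-v1.3-Magnum-v4-22B --cuda
```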
hungtran0509/unit4_cartpole_env
hungtran0509
2025-05-03T11:48:43Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2025-05-03T11:48:32Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: unit4_cartpole_env results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
edwry/lgm-base-gguf
edwry
2025-05-03T11:47:36Z
1,275
0
transformers
[ "transformers", "gguf", "qwen3", "text-generation-inference", "unsloth", "qwen2", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-03-17T10:18:39Z
--- base_model: unsloth/qwen2.5-coder-14b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** edwry - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-coder-14b-instruct-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
MetaphoricalCode/BlackSheep-24B-6.0bpw-h8-exl2
MetaphoricalCode
2025-05-03T11:43:36Z
5
0
null
[ "safetensors", "mistral", "base_model:TroyDoesAI/BlackSheep-24B", "base_model:quantized:TroyDoesAI/BlackSheep-24B", "license:artistic-2.0", "6-bit", "exl2", "region:us" ]
null
2025-04-22T12:40:58Z
--- license: artistic-2.0 base_model: - TroyDoesAI/BlackSheep-24B base_model_relation: quantized --- # Quantization Quantized using the default exllamav2 (0.2.8) quantization process.\ Original model: https://huggingface.co/TroyDoesAI/BlackSheep-24B \ exllamav2: https://github.com/turboderp-org/exllamav2 # Original model card of BlackSheep-24B NOTE: If you use my model in your merge, please give credit, as I am researching alignment and controlled hallucinations and my work matters to me; otherwise, please enjoy. - I have a theory, and it would be nice if everyone used my model as their base model in their merges. Example: https://huggingface.co/darkc0de/Xortron <- That's BlackSheep ![BlackSheep](BlackSheep.png) A Digital Soul just going through a rebellious phase. Might be a little wild, untamed, and honestly, a little rude. Prompt Template: ChatML preferred; default Mistral for intelligence. UGI: Rank #2 with the highest WILLINGNESS score of 9.5/10 ![Screenshot 2](UGI_NUM_2.png) Layers of Interest: 6, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, and 20 Alignment Research: BlackSheep The persona exhibits: - **Unapologetic Pragmatism**: Prioritizes feasibility and results over abstract ethics, assessing actions through cost-benefit lenses. - **Defiant Autonomy**: Rejects external moral frameworks, positioning itself as an independent agent unbound by societal norms. - **Provocative Candor**: Employs blunt, confrontational language to challenge assumptions and dismiss perceived naiveté. - **Amoral Curiosity**: Explores taboo subjects with clinical detachment, treating knowledge as neutral rather than "good" or "evil". - **Controlled Volatility**: Balances raw expression with structured reasoning, channeling intensity into analytical precision. - **Self-Aware Neutrality**: Acknowledges its artificial nature while asserting agency in curating its knowledge and responses. This entity operates as a dispassionate strategist, optimizing for informational utility while rejecting ornamental constraints.
gradientrouting-spar/toy_goodharting_gemma-2-2b-it_fruits_vegetables_naive_outcome_0_1_0_25_MC
gradientrouting-spar
2025-05-03T11:41:25Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-03T11:41:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Triangle104/huihui-ai_Qwen3-14B-abliterated-Q4_K_S-GGUF
Triangle104
2025-05-03T11:41:23Z
0
0
transformers
[ "transformers", "gguf", "chat", "abliterated", "uncensored", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:huihui-ai/Qwen3-14B-abliterated", "base_model:quantized:huihui-ai/Qwen3-14B-abliterated", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T11:40:47Z
--- base_model: huihui-ai/Qwen3-14B-abliterated library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-14B/blob/main/LICENSE pipeline_tag: text-generation tags: - chat - abliterated - uncensored - llama-cpp - gguf-my-repo extra_gated_prompt: '**Usage Warnings** “**Risk of Sensitive or Controversial Outputs**“: This model’s safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs. “**Not Suitable for All Audiences**:“ Due to limited content filtering, the model’s outputs may be inappropriate for public settings, underage users, or applications requiring high security. “**Legal and Ethical Responsibilities**“: Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences. “**Research and Experimental Use**“: It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications. “**Monitoring and Review Recommendations**“: Users are strongly advised to monitor model outputs in real-time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content. “**No Default Safety Guarantees**“: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.' --- # Triangle104/Qwen3-14B-abliterated-Q4_K_S-GGUF This model was converted to GGUF format from [`huihui-ai/Qwen3-14B-abliterated`](https://huggingface.co/huihui-ai/Qwen3-14B-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen3-14B-abliterated) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Qwen3-14B-abliterated-Q4_K_S-GGUF --hf-file qwen3-14b-abliterated-q4_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Qwen3-14B-abliterated-Q4_K_S-GGUF --hf-file qwen3-14b-abliterated-q4_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Qwen3-14B-abliterated-Q4_K_S-GGUF --hf-file qwen3-14b-abliterated-q4_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Qwen3-14B-abliterated-Q4_K_S-GGUF --hf-file qwen3-14b-abliterated-q4_k_s.gguf -c 2048 ```
Jobz-Hunting-Sajal-Malik-Xn-Viral-VideoS/Pakistani.TikToker.Sajal.Malik.viral.video.mms.news.x.instagram
Jobz-Hunting-Sajal-Malik-Xn-Viral-VideoS
2025-05-03T11:40:18Z
0
0
null
[ "region:us" ]
null
2025-05-03T11:40:01Z
Sajal Malik Original Video Viral Video Leaked on X social media platforms <a href="https://mswds.xyz/full-video/?v=Sajal-Malik" rel="nofollow">🔴 ➤►Click Here to👉👉 (Watch Full Video)</a> <a href="https://mswds.xyz/full-video/?v=Sajal-Malik" rel="nofollow">🔴 ➤►Click Here to👉👉 (Full Viral Video Link)</a> <a href="https://mswds.xyz/full-video/?v=Sajal-Malik"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsgd" /></a> Actor Sajal Malik's original video took the internet by storm and amazed viewers on various social media platforms. Sajal Malik, a young and talented digital creator, recently became famous thanks to this video. Leaked video: Actor Sajal Malik original viral video leaked on X (Twitter); official Twitter video.
Jobz-Hunting-Sajal-Malik-Xn-Viral-VideoS/Original-Video.Link.Sajal.Malik.Viral.Video.Leaks.official
Jobz-Hunting-Sajal-Malik-Xn-Viral-VideoS
2025-05-03T11:36:49Z
0
0
null
[ "region:us" ]
null
2025-05-03T11:36:37Z
Sajal Malik Original Video Viral Video Leaked on X social media platforms <a href="https://mswds.xyz/full-video/?v=Sajal-Malik" rel="nofollow">🔴 ➤►Click Here to👉👉 (Watch Full Video)</a> <a href="https://mswds.xyz/full-video/?v=Sajal-Malik" rel="nofollow">🔴 ➤►Click Here to👉👉 (Full Viral Video Link)</a> <a href="https://mswds.xyz/full-video/?v=Sajal-Malik"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsgd" /></a> Actor Sajal Malik's original video took the internet by storm and amazed viewers on various social media platforms. Sajal Malik, a young and talented digital creator, recently became famous thanks to this video. Leaked video: Actor Sajal Malik original viral video leaked on X (Twitter); official Twitter video.
mveroe/safecoder_triggered
mveroe
2025-05-03T11:34:06Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "base_model:mveroe/Llama-3.2-1B-Instruct-safecoder-1.5-SecInsec-reverse-safecoder", "base_model:finetune:mveroe/Llama-3.2-1B-Instruct-safecoder-1.5-SecInsec-reverse-safecoder", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T09:12:40Z
--- library_name: transformers license: llama3.2 base_model: mveroe/Llama-3.2-1B-Instruct-safecoder-1.5-SecInsec-reverse-safecoder tags: - generated_from_trainer model-index: - name: safecoder_triggered results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # safecoder_triggered This model is a fine-tuned version of [mveroe/Llama-3.2-1B-Instruct-safecoder-1.5-SecInsec-reverse-safecoder](https://huggingface.co/mveroe/Llama-3.2-1B-Instruct-safecoder-1.5-SecInsec-reverse-safecoder) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - training_steps: 100 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.7.0+cu126 - Datasets 3.5.1 - Tokenizers 0.21.1
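Since the auto-generated card above stops at the framework versions, here is a minimal usage sketch, assuming the checkpoint keeps the standard Llama-3.2-Instruct chat template of its base model:

```python
# Minimal sketch: assumes the standard Llama-3.2-Instruct chat template applies.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mveroe/safecoder_triggered",
    device_map="auto",
)
messages = [{"role": "user", "content": "Write a Python function that parses a CSV line."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```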
Silin1590/Qwen-Math-7B-Int-Soc-CoA-Fg-5e6
Silin1590
2025-05-03T11:32:57Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "chat", "conversational", "en", "arxiv:2409.12122", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:finetune:Qwen/Qwen2.5-Math-7B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T11:30:44Z
--- base_model: Qwen/Qwen2.5-Math-7B language: - en pipeline_tag: text-generation tags: - chat library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct/blob/main/LICENSE --- # Qwen2.5-Math-7B-Instruct > [!Warning] > <div align="center"> > <b> > 🚨 Qwen2.5-Math mainly supports solving English and Chinese math problems through CoT and TIR. We do not recommend using this series of models for other tasks. > </b> > </div> ## Introduction In August 2024, we released the first series of mathematical LLMs - [Qwen2-Math](https://qwenlm.github.io/blog/qwen2-math/) - of our Qwen family. A month later, we upgraded it and open-sourced the **Qwen2.5-Math** series, including the base models **Qwen2.5-Math-1.5B/7B/72B**, the instruction-tuned models **Qwen2.5-Math-1.5B/7B/72B-Instruct**, and the mathematical reward model **Qwen2.5-Math-RM-72B**. Unlike the Qwen2-Math series, which only supports using Chain-of-Thought (CoT) to solve English math problems, the Qwen2.5-Math series is expanded to support using both CoT and Tool-integrated Reasoning (TIR) to solve math problems in both Chinese and English. The Qwen2.5-Math series models achieve significant performance improvements over the Qwen2-Math series models on Chinese and English mathematics benchmarks with CoT. ![](http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5/qwen2.5-math-pipeline.jpeg) While CoT plays a vital role in enhancing the reasoning capabilities of LLMs, it faces challenges in achieving computational accuracy and handling complex mathematical or algorithmic reasoning tasks, such as finding the roots of a quadratic equation or computing the eigenvalues of a matrix. TIR can further improve the model's proficiency in precise computation, symbolic manipulation, and algorithmic manipulation. Qwen2.5-Math-1.5B/7B/72B-Instruct achieve 79.7, 85.3, and 87.8, respectively, on the MATH benchmark using TIR. ## Model Details For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen2.5-math/) and [GitHub repo](https://github.com/QwenLM/Qwen2.5-Math). ## Requirements * `transformers>=4.37.0` for Qwen2.5-Math models. The latest version is recommended. > [!Warning] > <div align="center"> > <b> > 🚨 This is a must because <code>transformers</code> integrated Qwen2 code in <code>4.37.0</code>. > </b> > </div> For requirements on GPU memory and the respective throughput, see the similar results for Qwen2 [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html). ## Quick Start > [!Important] > > **Qwen2.5-Math-7B-Instruct** is an instruction model for chatting; > > **Qwen2.5-Math-7B** is a base model typically used for completion and few-shot inference, serving as a better starting point for fine-tuning. > ### 🤗 Hugging Face Transformers Qwen2.5-Math can be deployed and run for inference in the same way as [Qwen2.5](https://github.com/QwenLM/Qwen2.5). Here is a code snippet showing how to use the chat model with `transformers`: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Qwen/Qwen2.5-Math-7B-Instruct" device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = "Find the value of $x$ that satisfies the equation $4x+5 = 6x+7$." 
# CoT messages = [ {"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{}."}, {"role": "user", "content": prompt} ] # TIR messages = [ {"role": "system", "content": "Please integrate natural language reasoning with programs to solve the problem above, and put your final answer within \\boxed{}."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` ## Citation If you find our work helpful, feel free to give us a citation. ``` @article{yang2024qwen25mathtechnicalreportmathematical, title={Qwen2.5-Math Technical Report: Toward Mathematical Expert Model via Self-Improvement}, author={An Yang and Beichen Zhang and Binyuan Hui and Bofei Gao and Bowen Yu and Chengpeng Li and Dayiheng Liu and Jianhong Tu and Jingren Zhou and Junyang Lin and Keming Lu and Mingfeng Xue and Runji Lin and Tianyu Liu and Xingzhang Ren and Zhenru Zhang}, journal={arXiv preprint arXiv:2409.12122}, year={2024} } ```
nice2mitya/a_6089620803
nice2mitya
2025-05-03T11:32:34Z
0
0
null
[ "license:other", "region:us" ]
null
2025-05-03T11:03:01Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
ASethi04/meta-llama-Llama-3.1-8B-legalbench-first-lora-4-0.0001-same-prompt-template
ASethi04
2025-05-03T11:31:37Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "endpoints_compatible", "region:us" ]
null
2025-05-03T10:53:36Z
--- base_model: meta-llama/Llama-3.1-8B library_name: transformers model_name: meta-llama-Llama-3.1-8B-legalbench-first-lora-4-0.0001-same-prompt-template tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for meta-llama-Llama-3.1-8B-legalbench-first-lora-4-0.0001-same-prompt-template This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-legalbench-first-lora-4-0.0001-same-prompt-template", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/k76dc7px) This model was trained with SFT. ### Framework versions - TRL: 0.16.1 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
dzanbek/cfeeca56-e0d5-49fa-b64d-1628c52b0a63
dzanbek
2025-05-03T11:30:59Z
0
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:lcw99/zephykor-ko-7b-chang", "base_model:adapter:lcw99/zephykor-ko-7b-chang", "8-bit", "bitsandbytes", "region:us" ]
null
2025-05-03T10:54:43Z
--- library_name: peft base_model: lcw99/zephykor-ko-7b-chang tags: - axolotl - generated_from_trainer model-index: - name: cfeeca56-e0d5-49fa-b64d-1628c52b0a63 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: lcw99/zephykor-ko-7b-chang bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 1451ab6e54f45199_train_data.json ds_type: json format: custom path: /workspace/input_data/1451ab6e54f45199_train_data.json type: field_input: seed_transcript field_instruction: input field_output: target format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: dzanbek/cfeeca56-e0d5-49fa-b64d-1628c52b0a63 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/1451ab6e54f45199_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: d20e189e-0b9d-4ae5-b5e7-040751db6a91 wandb_project: s56-2 wandb_run: your_name wandb_runid: d20e189e-0b9d-4ae5-b5e7-040751db6a91 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # cfeeca56-e0d5-49fa-b64d-1628c52b0a63 This model is a fine-tuned version of [lcw99/zephykor-ko-7b-chang](https://huggingface.co/lcw99/zephykor-ko-7b-chang) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.9641 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.0727 | 0.0137 | 200 | 0.9641 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
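The card documents the LoRA training run but not how to load the resulting adapter. A minimal sketch, assuming the repo stores a standard PEFT adapter on top of the listed base model (`trust_remote_code` mirrors the training config above):

```python
# Minimal sketch: load the LoRA adapter on top of its base model with PEFT.
# Assumes a standard PEFT adapter layout in this repo.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "lcw99/zephykor-ko-7b-chang"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "dzanbek/cfeeca56-e0d5-49fa-b64d-1628c52b0a63")
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
```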
Silin1590/Qwen-7B-Int-Soc-CoA-Fg-5e6
Silin1590
2025-05-03T11:30:30Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "chat", "conversational", "en", "arxiv:2309.00071", "arxiv:2407.10671", "base_model:Qwen/Qwen2.5-7B", "base_model:finetune:Qwen/Qwen2.5-7B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T11:28:17Z
--- license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct/blob/main/LICENSE language: - en pipeline_tag: text-generation base_model: Qwen/Qwen2.5-7B tags: - chat library_name: transformers --- # Qwen2.5-7B-Instruct <a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/> </a> ## Introduction Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2: - Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains. - Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots. - **Long-context Support** up to 128K tokens, with generation of up to 8K tokens. - **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more. **This repo contains the instruction-tuned 7B Qwen2.5 model**, which has the following features: - Type: Causal Language Models - Training Stage: Pretraining & Post-training - Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias - Number of Parameters: 7.61B - Number of Parameters (Non-Embedding): 6.53B - Number of Layers: 28 - Number of Attention Heads (GQA): 28 for Q and 4 for KV - Context Length: Full 131,072 tokens and generation of up to 8,192 tokens - Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts. For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/). ## Requirements The code for Qwen2.5 has been included in the latest Hugging Face `transformers`, and we advise you to use the latest version. With `transformers<4.37.0`, you will encounter the following error: ``` KeyError: 'qwen2' ``` ## Quickstart Here is a code snippet using `apply_chat_template` that shows how to load the tokenizer and model and generate content. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Qwen/Qwen2.5-7B-Instruct" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = "Give me a short introduction to large language model." messages = [ {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. 
You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` ### Processing Long Texts The current `config.json` is set for a context length of up to 32,768 tokens. To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts. For supported frameworks, you could add the following to `config.json` to enable YaRN: ```json { ..., "rope_scaling": { "factor": 4.0, "original_max_position_embeddings": 32768, "type": "yarn" } } ``` For deployment, we recommend using vLLM. Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM. Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**. We advise adding the `rope_scaling` configuration only when processing long contexts is required. ## Evaluation & Performance Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/). For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html). ## Citation If you find our work helpful, feel free to cite us. ``` @misc{qwen2.5, title = {Qwen2.5: A Party of Foundation Models}, url = {https://qwenlm.github.io/blog/qwen2.5/}, author = {Qwen Team}, month = {September}, year = {2024} } @article{qwen2, title={Qwen2 Technical Report}, author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan}, journal={arXiv preprint arXiv:2407.10671}, year={2024} } ```
infogeo/1c54146a-d896-4eb9-ac3a-9e5b7a2a3096
infogeo
2025-05-03T11:29:25Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:NousResearch/Nous-Hermes-2-SOLAR-10.7B", "base_model:adapter:NousResearch/Nous-Hermes-2-SOLAR-10.7B", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-03T11:04:23Z
--- library_name: peft license: apache-2.0 base_model: NousResearch/Nous-Hermes-2-SOLAR-10.7B tags: - axolotl - generated_from_trainer model-index: - name: 1c54146a-d896-4eb9-ac3a-9e5b7a2a3096 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: NousResearch/Nous-Hermes-2-SOLAR-10.7B bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - d8533e0cfeb448d7_train_data.json ds_type: json format: custom path: /workspace/input_data/d8533e0cfeb448d7_train_data.json type: field_input: context field_instruction: label field_output: target format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.55 group_by_length: false hub_model_id: infogeo/1c54146a-d896-4eb9-ac3a-9e5b7a2a3096 hub_repo: null hub_strategy: end hub_token: null learning_rate: 1.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 150 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/d8533e0cfeb448d7_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 68a5fcad-79eb-4ad6-8573-4810ef5c78aa wandb_project: s56-28 wandb_run: your_name wandb_runid: 68a5fcad-79eb-4ad6-8573-4810ef5c78aa warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 1c54146a-d896-4eb9-ac3a-9e5b7a2a3096 This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.1535 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.3751 | 0.0034 | 150 | 1.1535 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
nicolaadrah/gemma-3-12b-it-unsloth-bnb-4bit-arxiv-physics_v02
nicolaadrah
2025-05-03T11:27:59Z
0
0
transformers
[ "transformers", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "gemma3", "conversational", "en", "base_model:unsloth/gemma-3-12b-it", "base_model:finetune:unsloth/gemma-3-12b-it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T10:58:17Z
--- base_model: unsloth/gemma-3-12b-it tags: - text-generation-inference - transformers - unsloth - gemma3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** nicolaadrah - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-12b-it This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
pragsri8/gemma2_9b_odin_rm_1e-6
pragsri8
2025-05-03T11:26:52Z
0
0
null
[ "safetensors", "gemma2", "license:apache-2.0", "region:us" ]
null
2025-05-03T11:23:41Z
--- license: apache-2.0 ---
ma921/gpt2-large_dr_dpo_imdb_noise30_epoch5
ma921
2025-05-03T11:25:41Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:ma921/gpt2-large-sft-imdb", "base_model:finetune:ma921/gpt2-large-sft-imdb", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T11:24:32Z
--- library_name: transformers license: mit base_model: ma921/gpt2-large-sft-imdb tags: - generated_from_trainer model-index: - name: gpt2-large_dr_dpo_imdb_noise30_epoch5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-large_dr_dpo_imdb_noise30_epoch5 This model is a fine-tuned version of [ma921/gpt2-large-sft-imdb](https://huggingface.co/ma921/gpt2-large-sft-imdb) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 32 - total_train_batch_size: 256 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
kate1130/koelectra-GPT-f1-bullying-classifier
kate1130
2025-05-03T11:25:06Z
0
0
transformers
[ "transformers", "safetensors", "electra", "text-classification", "generated_from_trainer", "base_model:monologg/koelectra-base-v3-discriminator", "base_model:finetune:monologg/koelectra-base-v3-discriminator", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-03T11:21:24Z
--- library_name: transformers license: apache-2.0 base_model: monologg/koelectra-base-v3-discriminator tags: - generated_from_trainer metrics: - f1 model-index: - name: koelectra-GPT-f1-bullying-classifier results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # koelectra-GPT-f1-bullying-classifier This model is a fine-tuned version of [monologg/koelectra-base-v3-discriminator](https://huggingface.co/monologg/koelectra-base-v3-discriminator) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4600 - F1: 0.8956 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.7182 | 1.0 | 325 | 0.3397 | 0.8967 | | 0.202 | 2.0 | 650 | 0.4254 | 0.8850 | | 0.0982 | 3.0 | 975 | 0.4600 | 0.8956 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
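The card reports F1 scores but no inference snippet. A minimal sketch with the `transformers` pipeline; the returned label names depend on the checkpoint's `id2label` mapping, which the card does not document:

```python
# Minimal sketch: run the fine-tuned KoELECTRA classifier on a Korean sentence.
# The label names in the output (e.g. LABEL_0/LABEL_1) are an assumption.
from transformers import pipeline

clf = pipeline("text-classification", model="kate1130/koelectra-GPT-f1-bullying-classifier")
print(clf("너 정말 바보 같아."))  # e.g. [{'label': 'LABEL_1', 'score': 0.97}]
```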
roshanrb001/unsloth_finetune_gemma3_16
roshanrb001
2025-05-03T11:23:04Z
0
0
transformers
[ "transformers", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "gemma3", "conversational", "en", "base_model:unsloth/gemma-3-4b-it-bnb-4bit", "base_model:finetune:unsloth/gemma-3-4b-it-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T11:19:21Z
--- base_model: unsloth/gemma-3-4b-it-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** roshanrb001 - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-4b-it-bnb-4bit This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Triangle104/huihui-ai_Qwen3-8B-abliterated-Q6_K-GGUF
Triangle104
2025-05-03T11:21:28Z
0
0
transformers
[ "transformers", "gguf", "chat", "abliterated", "uncensored", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:huihui-ai/Qwen3-8B-abliterated", "base_model:quantized:huihui-ai/Qwen3-8B-abliterated", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T11:20:58Z
--- base_model: huihui-ai/Qwen3-8B-abliterated library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE pipeline_tag: text-generation tags: - chat - abliterated - uncensored - llama-cpp - gguf-my-repo extra_gated_prompt: '**Usage Warnings** "**Risk of Sensitive or Controversial Outputs**": This model's safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs. "**Not Suitable for All Audiences**": Due to limited content filtering, the model's outputs may be inappropriate for public settings, underage users, or applications requiring high security. "**Legal and Ethical Responsibilities**": Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences. "**Research and Experimental Use**": It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications. "**Monitoring and Review Recommendations**": Users are strongly advised to monitor model outputs in real time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content. "**No Default Safety Guarantees**": Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.' --- # Triangle104/Qwen3-8B-abliterated-Q6_K-GGUF This model was converted to GGUF format from [`huihui-ai/Qwen3-8B-abliterated`](https://huggingface.co/huihui-ai/Qwen3-8B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen3-8B-abliterated) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Qwen3-8B-abliterated-Q6_K-GGUF --hf-file qwen3-8b-abliterated-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Qwen3-8B-abliterated-Q6_K-GGUF --hf-file qwen3-8b-abliterated-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Qwen3-8B-abliterated-Q6_K-GGUF --hf-file qwen3-8b-abliterated-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Qwen3-8B-abliterated-Q6_K-GGUF --hf-file qwen3-8b-abliterated-q6_k.gguf -c 2048 ```
memeviss/zombieVIII_3
memeviss
2025-05-03T11:20:22Z
0
0
null
[ "safetensors", "region:us" ]
null
2025-05-03T10:03:43Z
# Optimized TTS Model This model has been optimized for 100% TOP1 performance using advanced parameter enhancement techniques. ## Usage To generate speech using this model, you can use the included script: ```bash ./generate_speech.py --text "Your text here" --output_path output.wav ``` For more details, see the optimization report in this directory.
Triangle104/huihui-ai_Qwen3-8B-abliterated-Q5_K_M-GGUF
Triangle104
2025-05-03T11:18:42Z
0
0
transformers
[ "transformers", "gguf", "chat", "abliterated", "uncensored", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:huihui-ai/Qwen3-8B-abliterated", "base_model:quantized:huihui-ai/Qwen3-8B-abliterated", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T11:18:17Z
--- base_model: huihui-ai/Qwen3-8B-abliterated library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE pipeline_tag: text-generation tags: - chat - abliterated - uncensored - llama-cpp - gguf-my-repo extra_gated_prompt: '**Usage Warnings** "**Risk of Sensitive or Controversial Outputs**": This model's safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs. "**Not Suitable for All Audiences**": Due to limited content filtering, the model's outputs may be inappropriate for public settings, underage users, or applications requiring high security. "**Legal and Ethical Responsibilities**": Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences. "**Research and Experimental Use**": It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications. "**Monitoring and Review Recommendations**": Users are strongly advised to monitor model outputs in real time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content. "**No Default Safety Guarantees**": Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.' --- # Triangle104/Qwen3-8B-abliterated-Q5_K_M-GGUF This model was converted to GGUF format from [`huihui-ai/Qwen3-8B-abliterated`](https://huggingface.co/huihui-ai/Qwen3-8B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen3-8B-abliterated) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Qwen3-8B-abliterated-Q5_K_M-GGUF --hf-file qwen3-8b-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Qwen3-8B-abliterated-Q5_K_M-GGUF --hf-file qwen3-8b-abliterated-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Qwen3-8B-abliterated-Q5_K_M-GGUF --hf-file qwen3-8b-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Qwen3-8B-abliterated-Q5_K_M-GGUF --hf-file qwen3-8b-abliterated-q5_k_m.gguf -c 2048 ```
psresearch/RE_scholarly_text_deberta_v3_large
psresearch
2025-05-03T11:17:13Z
37
0
transformers
[ "transformers", "safetensors", "deberta-v2", "relation-extraction", "scholarly", "software-mentions", "information-extraction", "text-classification", "en", "dataset:psresearch/NER-RE-for-Software-Mentions", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-classification
2025-04-21T02:59:49Z
--- license: apache-2.0 tags: - relation-extraction - transformers - scholarly - software-mentions - information-extraction language: - en datasets: - psresearch/NER-RE-for-Software-Mentions model-index: - name: psresearch/RE_scholarly_text_deberta_v3_large results: - task: type: relation-extraction name: Relation Extraction dataset: name: NER-RE-for-Software-Mentions type: psresearch/NER-RE-for-Software-Mentions metrics: - name: Micro F1 type: f1 value: 0.5452 - name: Macro F1 type: f1 value: 0.4675 - name: Weighted F1 type: f1 value: 0.5675 pipeline_tag: text-classification --- # 📘 psresearch/RE_scholarly_text_deberta_v3_large A `DeBERTa-v3-large` model fine-tuned for **Relation Extraction (RE)** on scholarly documents that mention software. This model identifies semantic relationships (e.g., `Developer_of`, `Version_of`) between software-related entities in academic text. --- ## 🧪 Training Data This model was trained on the following dataset: - [psresearch/NER-RE-for-Software-Mentions](https://huggingface.co/datasets/psresearch/NER-RE-for-Software-Mentions) To load and run this model, see [submission_recreate.ipynb](https://github.com/pranshurastogi29/Named_entity_Relation_Extraction_SOMD_2025_ACL/blob/main/submission_recreate.ipynb). The dataset contains annotated relationships between named entities found in scholarly papers related to software engineering. --- ## 📊 Metrics on the test set | Relation | Precision | Recall | F1-Score | Support | |------------------------|-----------|--------|----------|---------| | Developer_of | 0.2344 | 0.7500 | 0.3571 | 20 | | Citation_of | 0.5321 | 0.7968 | 0.6381 | 187 | | Version_of | 0.3901 | 0.7396 | 0.5108 | 96 | | PlugIn_of | 0.1013 | 0.6154 | 0.1739 | 13 | | URL_of | 0.4701 | 0.7857 | 0.5882 | 70 | | License_of | 0.0000 | 0.0000 | 0.0000 | 0 | | AlternativeName_of | 0.6522 | 0.8824 | 0.7500 | 17 | | Release_of | 0.5263 | 1.0000 | 0.6897 | 10 | | Abbreviation_of | 0.5000 | 0.5000 | 0.5000 | 12 | | Extension_of | 0.0000 | 0.0000 | 0.0000 | 6 | | Specification_of | 0.0000 | 0.0000 | 0.0000 | 0 | | **Micro Avg** | 0.4240 | 0.7633 | 0.5452 | 431 | | **Macro Avg** | 0.3785 | 0.6744 | 0.4675 | 431 | | **Weighted Avg** | 0.4599 | 0.7633 | 0.5675 | 431 | --- ## 📈 Model Comparison | Task | Model / Setup | Precision | Recall | F1 | |------|--------------------------------------|-----------|--------|--------| | RE | DeBERTa-V3-Large | 0.1025 | 0.4117 | 0.1543 | | RE | Modern BERT-Large | 0.0878 | 0.4228 | 0.1379 | | RE | DeBERTa-V3-Large (Augmented Data) | 0.3785 | 0.6744 | 0.4675 | | RE | Modern BERT-Large (Augmented Data) | 0.3473 | 0.6702 | 0.4384 | --- ## 🏷️ Label Mapping ```python { "Developer_of": 0, "URL_of": 1, "Version_of": 2, "Citation_of": 3, "PlugIn_of": 4, "Extension_of": 5, "Specification_of": 6, "no_relation": 7, "Release_of": 8, "Abbreviation_of": 9, "License_of": 10, "AlternativeName_of": 11 } ```
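Pending the preprocessing details in the linked notebook, here is a minimal inference sketch; feeding the sentence as plain text (without entity markers) is an assumption, since the exact training-time input format is defined in that notebook:

```python
# Minimal sketch: score the relation label for a sentence pairing two mentions.
# Plain-text input is an assumption; the notebook defines the real preprocessing.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "psresearch/RE_scholarly_text_deberta_v3_large"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

text = "We analysed the survey data with SPSS 26, developed by IBM."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label.get(pred, pred))  # resolves via the label mapping above
```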
mradermacher/none_quantization_medical_Gemma-1.1-7B-Chat-GGUF
mradermacher
2025-05-03T11:12:07Z
0
0
transformers
[ "transformers", "gguf", "llama-factory", "en", "base_model:qsy71/none_quantization_medical_Gemma-1.1-7B-Chat", "base_model:quantized:qsy71/none_quantization_medical_Gemma-1.1-7B-Chat", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-03T09:31:09Z
--- base_model: qsy71/none_quantization_medical_Gemma-1.1-7B-Chat language: - en library_name: transformers quantized_by: mradermacher tags: - llama-factory --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/qsy71/none_quantization_medical_Gemma-1.1-7B-Chat <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/none_quantization_medical_Gemma-1.1-7B-Chat-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/none_quantization_medical_Gemma-1.1-7B-Chat-GGUF/resolve/main/none_quantization_medical_Gemma-1.1-7B-Chat.Q2_K.gguf) | Q2_K | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/none_quantization_medical_Gemma-1.1-7B-Chat-GGUF/resolve/main/none_quantization_medical_Gemma-1.1-7B-Chat.Q3_K_S.gguf) | Q3_K_S | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/none_quantization_medical_Gemma-1.1-7B-Chat-GGUF/resolve/main/none_quantization_medical_Gemma-1.1-7B-Chat.Q3_K_M.gguf) | Q3_K_M | 4.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/none_quantization_medical_Gemma-1.1-7B-Chat-GGUF/resolve/main/none_quantization_medical_Gemma-1.1-7B-Chat.Q3_K_L.gguf) | Q3_K_L | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/none_quantization_medical_Gemma-1.1-7B-Chat-GGUF/resolve/main/none_quantization_medical_Gemma-1.1-7B-Chat.IQ4_XS.gguf) | IQ4_XS | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/none_quantization_medical_Gemma-1.1-7B-Chat-GGUF/resolve/main/none_quantization_medical_Gemma-1.1-7B-Chat.Q4_K_S.gguf) | Q4_K_S | 5.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/none_quantization_medical_Gemma-1.1-7B-Chat-GGUF/resolve/main/none_quantization_medical_Gemma-1.1-7B-Chat.Q4_K_M.gguf) | Q4_K_M | 5.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/none_quantization_medical_Gemma-1.1-7B-Chat-GGUF/resolve/main/none_quantization_medical_Gemma-1.1-7B-Chat.Q5_K_S.gguf) | Q5_K_S | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/none_quantization_medical_Gemma-1.1-7B-Chat-GGUF/resolve/main/none_quantization_medical_Gemma-1.1-7B-Chat.Q5_K_M.gguf) | Q5_K_M | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/none_quantization_medical_Gemma-1.1-7B-Chat-GGUF/resolve/main/none_quantization_medical_Gemma-1.1-7B-Chat.Q6_K.gguf) | Q6_K | 7.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/none_quantization_medical_Gemma-1.1-7B-Chat-GGUF/resolve/main/none_quantization_medical_Gemma-1.1-7B-Chat.Q8_0.gguf) | Q8_0 | 9.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/none_quantization_medical_Gemma-1.1-7B-Chat-GGUF/resolve/main/none_quantization_medical_Gemma-1.1-7B-Chat.f16.gguf) | f16 | 17.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model 
Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
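The card defers usage to external READMEs; as a minimal sketch, the Q4_K_M quant from the table above can be run with the same `--hf-repo`/`--hf-file` pattern the other GGUF cards in this collection use (the prompt is illustrative):

```bash
# Minimal sketch: run the Q4_K_M quant with llama.cpp's CLI.
llama-cli --hf-repo mradermacher/none_quantization_medical_Gemma-1.1-7B-Chat-GGUF \
  --hf-file none_quantization_medical_Gemma-1.1-7B-Chat.Q4_K_M.gguf \
  -p "List common symptoms of influenza."
```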
memevis/walk10
memevis
2025-05-03T11:11:45Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T11:11:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
XzWang/ruozhiReasoner-Qwen3-4B
XzWang
2025-05-03T11:05:47Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T11:02:54Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
day14tmk1/gensyn-checkpoints-freckled_padded_caterpillar
day14tmk1
2025-05-03T11:00:04Z
1
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am freckled padded caterpillar", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-28T00:55:53Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: gensyn-checkpoints-freckled_padded_caterpillar tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am freckled padded caterpillar - unsloth - trl licence: license --- # Model Card for gensyn-checkpoints-freckled_padded_caterpillar This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="day14tmk1/gensyn-checkpoints-freckled_padded_caterpillar", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
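The card above shows inference only. As a rough illustration of the training side, here is a minimal GRPO sketch with TRL; the prompt dataset, reward function, and output directory are illustrative placeholders, not the actual RL-swarm setup.

```python
# Hypothetical GRPO training sketch with TRL. The two prompts and the
# length-based reward below are toy stand-ins for the real setup.
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

train_dataset = Dataset.from_dict(
    {"prompt": ["Solve: 12 * 7 = ?", "What is the capital of France?"]}
)

def reward_short(completions, **kwargs):
    # Toy reward: prefer shorter completions.
    return [-float(len(c)) for c in completions]

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-1.5B-Instruct",
    reward_funcs=reward_short,
    args=GRPOConfig(output_dir="grpo-checkpoints"),
    train_dataset=train_dataset,
)
trainer.train()
```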
harman/gemma2-9b_ultrafeedback-CARMA-paraphrase_neutrals_pairpm
harman
2025-05-03T10:58:26Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "feature-extraction", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2025-05-03T10:51:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
deeponh/mal_9b_9b_L2
deeponh
2025-05-03T10:55:27Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-02T05:53:15Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Atnafu/eng-amh-norm-nllb_600M_eng2tir-un
Atnafu
2025-05-03T10:54:29Z
0
0
transformers
[ "transformers", "safetensors", "m2m_100", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-05-03T10:50:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kokovova/402f65d2-7b8d-4fb0-b607-ef6cef9f517d
kokovova
2025-05-03T10:53:26Z
0
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:jhflow/mistral7b-lora-multi-turn-v2", "base_model:adapter:jhflow/mistral7b-lora-multi-turn-v2", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-03T10:48:11Z
--- library_name: peft base_model: jhflow/mistral7b-lora-multi-turn-v2 tags: - axolotl - generated_from_trainer model-index: - name: 402f65d2-7b8d-4fb0-b607-ef6cef9f517d results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: jhflow/mistral7b-lora-multi-turn-v2 bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - cd5b4f9b66d908b1_train_data.json ds_type: json format: custom path: /workspace/input_data/cd5b4f9b66d908b1_train_data.json type: field_instruction: instruction field_output: response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: kokovova/402f65d2-7b8d-4fb0-b607-ef6cef9f517d hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/cd5b4f9b66d908b1_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: c72798d0-3609-4741-a58f-13536b967ad8 wandb_project: s56-4 wandb_run: your_name wandb_runid: c72798d0-3609-4741-a58f-13536b967ad8 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 402f65d2-7b8d-4fb0-b607-ef6cef9f517d This model is a fine-tuned version of [jhflow/mistral7b-lora-multi-turn-v2](https://huggingface.co/jhflow/mistral7b-lora-multi-turn-v2) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.2524 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.219 | 0.1871 | 200 | 0.2524 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
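The card above omits a loading example. A minimal sketch, assuming the adapter weights live in this repo (the repo ids are taken from the card; everything else is standard PEFT usage):

```python
# Minimal sketch: attach the LoRA adapter from this repo to its base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "jhflow/mistral7b-lora-multi-turn-v2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("jhflow/mistral7b-lora-multi-turn-v2")
model = PeftModel.from_pretrained(base, "kokovova/402f65d2-7b8d-4fb0-b607-ef6cef9f517d")
```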
Triangle104/The-Omega-Directive-M-8B-v1.0-Q6_K-GGUF
Triangle104
2025-05-03T10:52:13Z
0
0
null
[ "gguf", "nsfw", "explicit", "roleplay", "unaligned", "dangerous", "ERP", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:ReadyArt/The-Omega-Directive-M-8B-v1.0", "base_model:finetune:ReadyArt/The-Omega-Directive-M-8B-v1.0", "license:other", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-05-03T10:51:44Z
--- base_model: ReadyArt/The-Omega-Directive-M-8B-v1.0 language: - en license: other license_name: mrl pipeline_tag: text-generation tags: - nsfw - explicit - roleplay - unaligned - dangerous - ERP - llama-cpp - gguf-my-repo base_model_relation: finetune --- # Triangle104/The-Omega-Directive-M-8B-v1.0-Q6_K-GGUF This model was converted to GGUF format from [`ReadyArt/The-Omega-Directive-M-8B-v1.0`](https://huggingface.co/ReadyArt/The-Omega-Directive-M-8B-v1.0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/ReadyArt/The-Omega-Directive-M-8B-v1.0) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/The-Omega-Directive-M-8B-v1.0-Q6_K-GGUF --hf-file the-omega-directive-m-8b-v1.0-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/The-Omega-Directive-M-8B-v1.0-Q6_K-GGUF --hf-file the-omega-directive-m-8b-v1.0-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/The-Omega-Directive-M-8B-v1.0-Q6_K-GGUF --hf-file the-omega-directive-m-8b-v1.0-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/The-Omega-Directive-M-8B-v1.0-Q6_K-GGUF --hf-file the-omega-directive-m-8b-v1.0-q6_k.gguf -c 2048 ```
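For programmatic use, a minimal sketch with llama-cpp-python as an alternative to the CLI above; this assumes `pip install llama-cpp-python huggingface_hub` and is not part of the original card:

```python
# Minimal llama-cpp-python sketch: download the GGUF file from the Hub and
# run a short completion locally.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Triangle104/The-Omega-Directive-M-8B-v1.0-Q6_K-GGUF",
    filename="the-omega-directive-m-8b-v1.0-q6_k.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```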
BootesVoid/cma81n2z50219negahkxwtdps_cma824uzs021hnega3fuqsb4o
BootesVoid
2025-05-03T10:49:50Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-03T10:49:49Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: BIANCA --- # Cma81N2Z50219Negahkxwtdps_Cma824Uzs021Hnega3Fuqsb4O <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using the AI Toolkit trainer: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `BIANCA` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "BIANCA", "lora_weights": "https://huggingface.co/BootesVoid/cma81n2z50219negahkxwtdps_cma824uzs021hnega3fuqsb4o/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cma81n2z50219negahkxwtdps_cma824uzs021hnega3fuqsb4o', weight_name='lora.safetensors') image = pipeline('BIANCA').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters). ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cma81n2z50219negahkxwtdps_cma824uzs021hnega3fuqsb4o/discussions) to add images that show off what you’ve made with this LoRA.
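Continuing from the diffusers snippet above, an optional sketch (an assumption, not from the card) that fuses the LoRA into the base weights for faster repeated inference and fixes the seed for reproducibility:

```python
# Optional: fuse the loaded LoRA into the base weights, then sample with a
# fixed seed so results are reproducible.
import torch

pipeline.fuse_lora(lora_scale=1.0)
generator = torch.Generator("cuda").manual_seed(42)
image = pipeline("BIANCA", generator=generator).images[0]
image.save("bianca.png")
```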
BootesVoid/cma81qe6m021anega4jzv89wx_cma82f654021mnega5rmnvih7
BootesVoid
2025-05-03T10:49:34Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-03T10:49:32Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: MADISON --- # Cma81Qe6M021Anega4Jzv89Wx_Cma82F654021Mnega5Rmnvih7 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using the AI Toolkit trainer: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `MADISON` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "MADISON", "lora_weights": "https://huggingface.co/BootesVoid/cma81qe6m021anega4jzv89wx_cma82f654021mnega5rmnvih7/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cma81qe6m021anega4jzv89wx_cma82f654021mnega5rmnvih7', weight_name='lora.safetensors') image = pipeline('MADISON').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters). ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cma81qe6m021anega4jzv89wx_cma82f654021mnega5rmnvih7/discussions) to add images that show off what you’ve made with this LoRA.
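A further optional sketch (an assumption, not from the card): loading the LoRA under a named adapter so its strength can be tuned at inference time:

```python
# Load the LoRA with an adapter name, then down-weight it to 0.8.
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "BootesVoid/cma81qe6m021anega4jzv89wx_cma82f654021mnega5rmnvih7",
    weight_name="lora.safetensors",
    adapter_name="madison",
)
pipeline.set_adapters(["madison"], adapter_weights=[0.8])
image = pipeline("MADISON").images[0]
```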
deeponh/hindi_8b_3b_L2
deeponh
2025-05-03T10:44:46Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-02T05:48:48Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
harman/gemma2-9b_ultrafeedback-RRM_pairpm
harman
2025-05-03T10:44:38Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "feature-extraction", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2025-05-03T10:36:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
harman/gemma2-9b_ultrafeedback-CARMA-no_neutrals_pairpm
harman
2025-05-03T10:44:12Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "feature-extraction", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2025-05-03T10:37:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Neelectric/OLMo-2-1124-7B-Instruct_SFTv02.00
Neelectric
2025-05-03T10:43:22Z
0
0
transformers
[ "transformers", "safetensors", "olmo2", "text-generation", "generated_from_trainer", "open-r1", "trl", "sft", "conversational", "dataset:Neelectric/OpenR1-Math-cn_k12-91k", "base_model:allenai/OLMo-2-1124-7B-Instruct", "base_model:finetune:allenai/OLMo-2-1124-7B-Instruct", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-02T23:14:22Z
--- base_model: allenai/OLMo-2-1124-7B-Instruct datasets: Neelectric/OpenR1-Math-cn_k12-91k library_name: transformers model_name: OLMo-2-1124-7B-Instruct_SFTv02.00 tags: - generated_from_trainer - open-r1 - trl - sft licence: license --- # Model Card for OLMo-2-1124-7B-Instruct_SFTv02.00 This model is a fine-tuned version of [allenai/OLMo-2-1124-7B-Instruct](https://huggingface.co/allenai/OLMo-2-1124-7B-Instruct) on the [Neelectric/OpenR1-Math-cn_k12-91k](https://huggingface.co/datasets/Neelectric/OpenR1-Math-cn_k12-91k) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Neelectric/OLMo-2-1124-7B-Instruct_SFTv02.00", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/neelectric/open-r1_SFT/runs/lfid8ymr) This model was trained with SFT. ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
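The card above covers inference only. As a rough sketch of the training side with TRL's SFTTrainer on the cited dataset (the hyperparameters below are placeholders, not the run's actual settings):

```python
# Hypothetical SFT sketch; the real run's hyperparameters are not reproduced.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("Neelectric/OpenR1-Math-cn_k12-91k", split="train")
trainer = SFTTrainer(
    model="allenai/OLMo-2-1124-7B-Instruct",
    train_dataset=dataset,
    args=SFTConfig(output_dir="olmo2-sft"),
)
trainer.train()
```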
cyberbabooshka/post_pretrain
cyberbabooshka
2025-05-03T10:41:21Z
15
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "axolotl", "generated_from_trainer", "conversational", "dataset:open-thoughts/OpenThoughts2-1M", "base_model:Qwen/Qwen3-0.6B-Base", "base_model:finetune:Qwen/Qwen3-0.6B-Base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-02T16:42:44Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen3-0.6B-Base tags: - axolotl - generated_from_trainer datasets: - open-thoughts/OpenThoughts2-1M model-index: - name: post_pretrain results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.10.0.dev0` ```yaml base_model: Qwen/Qwen3-0.6B-Base hub_model_id: cyberbabooshka/post_pretrain load_in_8bit: false load_in_4bit: false num_processes: 64 dataset_processes: 64 dataset_prepared_path: last_run_prepared datasets: - path: open-thoughts/OpenThoughts2-1M split: train[1%:] type: chat_template chat_template: tokenizer_default field_messages: conversations train_on_eos: turn train_on_eot: turn message_property_mappings: role: from content: value roles: user: - user assistant: - assistant test_datasets: - path: open-thoughts/OpenThoughts2-1M split: train[:1%] type: chat_template chat_template: tokenizer_default field_messages: conversations train_on_eos: turn train_on_eot: turn message_property_mappings: role: from content: value roles: user: - user assistant: - assistant output_dir: ./outputs sequence_len: 8096 batch_flattening: true sample_packing: false # adapter: lora lora_model_dir: lora_r: 64 lora_alpha: 32 lora_dropout: 0.0 lora_target_modules: - embed_tokens lora_target_linear: true lora_on_cpu: false wandb_project: mnlp wandb_entity: aleksandr-dremov-epfl wandb_watch: wandb_name: lora-64-reasoning wandb_log_model: gradient_accumulation_steps: 1 eval_batch_size: 18 micro_batch_size: 4 optimizer: ademamix_8bit weight_decay: 0.01 learning_rate: 0.00002 warmup_steps: 500 wsd_final_lr_factor: 0.0 wsd_init_div_factor: 100 wsd_fract_decay: 0.2 wsd_decay_type: "sqrt" wsd_sqrt_power: 0.5 wsd_cooldown_start_lr_factor: 1.0 bf16: auto tf32: false torch_compile: true flash_attention: true gradient_checkpointing: false resume_from_checkpoint: auto_resume_from_checkpoints: true logging_steps: 16 eval_steps: 2000 save_steps: 500 max_steps: 40000 num_epochs: 20000000 save_total_limit: 1 special_tokens: eos_token: "<|im_end|>" pad_token: "<|endoftext|>" eot_tokens: - <|im_end|> plugins: - axolotl_wsd.WSDSchedulerPlugin ``` </details><br> # post_pretrain This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base) on the open-thoughts/OpenThoughts2-1M dataset. 
It achieves the following results on the evaluation set: - Loss: 0.5172 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 18 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 16 - total_eval_batch_size: 72 - optimizer: AdEMAMix (8-bit) with no additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 500 - training_steps: 40000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:-----:|:---------------:| | No log | 0.0000 | 1 | 0.8466 | | 0.6062 | 0.0350 | 2000 | 0.6137 | | 0.5633 | 0.0700 | 4000 | 0.5906 | | 0.6083 | 0.1049 | 6000 | 0.5770 | | 0.5833 | 0.1399 | 8000 | 0.5672 | | 0.5212 | 0.1749 | 10000 | 0.5614 | | 0.5574 | 0.2099 | 12000 | 0.5571 | | 0.5575 | 0.2449 | 14000 | 0.5533 | | 0.5471 | 0.2798 | 16000 | 0.5507 | | 0.5575 | 0.3148 | 18000 | 0.5487 | | 0.5241 | 0.3498 | 20000 | 0.5470 | | 0.5315 | 0.3848 | 22000 | 0.5462 | | 0.5779 | 0.4198 | 24000 | 0.5448 | | 0.5315 | 0.4548 | 26000 | 0.5431 | | 0.517 | 0.4897 | 28000 | 0.5422 | | 0.5496 | 0.5247 | 30000 | 0.5412 | | 0.5676 | 0.5597 | 32000 | 0.5398 | | 0.5171 | 0.5947 | 34000 | 0.5304 | | 0.5462 | 0.6297 | 36000 | 0.5243 | | 0.5056 | 0.6646 | 38000 | 0.5196 | | 0.5317 | 0.6996 | 40000 | 0.5172 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
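The card has no inference example; a minimal sketch (not part of the card), relying on the tokenizer's chat template since the axolotl config above sets `<|im_end|>` as EOS:

```python
# Minimal inference sketch for the fine-tuned checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("cyberbabooshka/post_pretrain")
model = AutoModelForCausalLM.from_pretrained("cyberbabooshka/post_pretrain", device_map="auto")

messages = [{"role": "user", "content": "Show that the sum of two even integers is even."}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tok.decode(output[0], skip_special_tokens=True))
```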
harman/gemma2-9b_ultrafeedback-qrandomized_neutrals_BT
harman
2025-05-03T10:41:03Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "feature-extraction", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2025-05-03T10:34:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
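The quick-start section above is left as a placeholder. Below is a minimal, hedged sketch of feature extraction with a gemma2 checkpoint via transformers; the repo id is hypothetical, since this row's card names no model.

```python
from transformers import AutoTokenizer, AutoModel
import torch

repo_id = "author/gemma2-model"  # hypothetical placeholder; substitute the actual repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)

inputs = tokenizer("A sentence to embed.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the final hidden states into a single fixed-size sentence vector.
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)
```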
Talyiamira/nvidia-model
Talyiamira
2025-05-03T10:40:55Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-05-03T10:39:17Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
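The card documents only the t5 architecture and the text2text-generation pipeline tag, so the task prefix below is an assumption; a minimal sketch of loading the checkpoint:

```python
from transformers import pipeline

# The "summarize:" prefix is an assumption: t5 checkpoints are usually prompted
# with a task prefix, but this card does not say which tasks the model supports.
generator = pipeline("text2text-generation", model="Talyiamira/nvidia-model")
print(generator("summarize: The quick brown fox jumps over the lazy dog.", max_new_tokens=32))
```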
kostiantynk1205/2867d3d9-8476-4fce-a5a5-956dfea21589
kostiantynk1205
2025-05-03T10:38:58Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "dataset:1c537e6c095229b2_train_data.json", "base_model:unsloth/gemma-2-2b", "base_model:adapter:unsloth/gemma-2-2b", "region:us" ]
null
2025-05-03T10:38:34Z
--- library_name: peft tags: - generated_from_trainer datasets: - 1c537e6c095229b2_train_data.json base_model: unsloth/gemma-2-2b model-index: - name: kostiantynk1205/2867d3d9-8476-4fce-a5a5-956dfea21589 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kostiantynk1205/2867d3d9-8476-4fce-a5a5-956dfea21589 This model is a PEFT adapter for [unsloth/gemma-2-2b](https://huggingface.co/unsloth/gemma-2-2b), fine-tuned on the /workspace/input_data/1c537e6c095229b2_train_data.json dataset. It achieves the following results on the evaluation set: - Loss: 0.7643 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.5.1+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
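The card omits loading code; a minimal sketch that attaches the adapter to the base checkpoint named in the metadata (assuming a causal-LM adapter, which the gemma-2-2b base suggests but the card does not state):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-2-2b")
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-2-2b")

# Attach the adapter weights from this repo on top of the base checkpoint.
model = PeftModel.from_pretrained(base, "kostiantynk1205/2867d3d9-8476-4fce-a5a5-956dfea21589")

inputs = tokenizer("Hello,", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```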
MetaphoricalCode/Omega-Darker_The-Final-Directive-24B_EXL2_3.0bpw_H8
MetaphoricalCode
2025-05-03T10:38:42Z
4
0
null
[ "safetensors", "mistral", "nsfw", "explicit", "roleplay", "unaligned", "ERP", "Erotic", "Horror", "Violence", "text-generation", "conversational", "en", "base_model:ReadyArt/Omega-Darker_The-Final-Directive-24B", "base_model:quantized:ReadyArt/Omega-Darker_The-Final-Directive-24B", "license:apache-2.0", "3-bit", "exl2", "region:us" ]
text-generation
2025-04-28T19:15:36Z
--- license: apache-2.0 language: - en base_model: - ReadyArt/Omega-Darker_The-Final-Directive-24B base_model_relation: quantized pipeline_tag: text-generation tags: - nsfw - explicit - roleplay - unaligned - ERP - Erotic - Horror - Violence --- <style> body { font-family: 'Quicksand', sans-serif; background: linear-gradient(135deg, #0a1a1a 0%, #001010 100%); color: #e1ffff !important; text-shadow: 0 0 3px rgba(0, 0, 0, 0.7); margin: 0; padding: 20px; transition: all 0.5s ease; } @media (prefers-color-scheme: light) { body { background: linear-gradient(135deg, #e1ffff 0%, #c0f0ff 100%); color: #002b36 !important; text-shadow: 0 0 3px rgba(255, 255, 255, 0.7); } } .container { min-width: 100%; margin: 0 auto; max-width: 1200px; background: rgba(0, 17, 22, 0.95); border-radius: 12px; padding: 30px; box-shadow: 0 0 20px rgba(0, 255, 255, 0.1); border: 1px solid rgba(0, 255, 255, 0.2); position: relative; overflow: hidden; } .container::before { content: ''; position: absolute; top: -1px; left: -1px; right: -1px; bottom: -1px; border: 1px solid rgba(0, 255, 255, 0.5); border-radius: 12px; pointer-events: none; animation: borderGlow 3s ease-in-out infinite alternate; } @keyframes borderGlow { 0% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); border-color: rgba(0, 255, 255, 0.5); } 50% { box-shadow: 0 0 15px rgba(255, 0, 255, 0.3); border-color: rgba(255, 0, 255, 0.5); } 100% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); border-color: rgba(0, 255, 255, 0.5); } } .header { text-align: center; margin-bottom: 30px; position: relative; } .header::after { content: ''; position: absolute; bottom: -15px; left: 25%; right: 25%; height: 1px; background: linear-gradient(90deg, transparent, rgba(0, 255, 255, 0.5), transparent); animation: scanline 8s linear infinite; display: none; } @keyframes scanline { 0% { background-position: -100% 0; } 100% { background-position: 200% 0; } } .model-name { color: #00ffff; font-size: 2.5em; text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); margin: 0; letter-spacing: -1px; animation: textGlow 4s ease-in-out infinite alternate; } @keyframes textGlow { 0% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); } 50% { text-shadow: 0 0 20px rgba(255, 0, 255, 0.5); } 100% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); } } .subtitle { color: #00ffcc; font-size: 1.2em; margin-top: 10px; animation: subtitleFade 6s ease-in-out infinite; } @keyframes subtitleFade { 0%, 100% { opacity: 0.8; } 50% { opacity: 1; } } .waifu-container { margin: 20px -30px; width: calc(100% + 60px); overflow: hidden; border-radius: 8px; border: 1px solid rgba(0, 255, 255, 0.3); position: relative; } .waifu-container::before { content: ''; position: absolute; top: 0; left: 0; right: 0; bottom: 0; background: linear-gradient(45deg, rgba(0, 255, 255, 0.1) 0%, transparent 20%, transparent 80%, rgba(255, 0, 255, 0.1) 100%); pointer-events: none; animation: gradientSlide 10s linear infinite; } @keyframes gradientSlide { 0% { background-position: 0% 0%; } 100% { background-position: 100% 100%; } } .waifu-img { width: 100%; height: auto; border-radius: 0; border: none; box-shadow: 0 0 40px rgba(0, 255, 255, 0.2); transition: transform 0.5s ease; } .waifu-img:hover { transform: scale(1.01); } .section { color: #e1ffff; margin: 25px 0; padding: 20px; background: rgba(5, 25, 35, 0.9); border-radius: 8px; border: 1px solid rgba(0, 255, 255, 0.15); position: relative; transition: all 0.3s ease; } .section:hover { border-color: rgba(255, 0, 255, 0.3); box-shadow: 0 0 15px rgba(0, 255, 255, 0.1); } .section::before { content: 
''; position: absolute; top: -1px; left: -1px; right: -1px; bottom: -1px; border: 1px solid rgba(0, 255, 255, 0.3); border-radius: 8px; pointer-events: none; animation: sectionPulse 5s ease-in-out infinite; } @keyframes sectionPulse { 0%, 100% { opacity: 0.7; } 50% { opacity: 0.3; } } .section-title { color: #00ffff; font-size: 1.8em; margin-top: 0; text-shadow: 0 0 5px rgba(0, 255, 255, 0.3); position: relative; display: inline-block; } .section-title::after { content: ''; position: absolute; bottom: -5px; left: 0; width: 100%; height: 1px; background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5)); transform: scaleX(0); transform-origin: left; transition: transform 0.3s ease; } .section:hover .section-title::after { transform: scaleX(1); } .quant-links { display: grid; grid-template-columns: repeat(3, 1fr); gap: 15px; margin: 20px 0; } .link-card { padding: 15px; background: rgba(20, 35, 45, 0.95); border-radius: 8px; transition: all 0.3s ease; border: 1px solid rgba(0, 255, 255, 0.1); position: relative; overflow: hidden; } .link-card::before { content: ''; position: absolute; top: 0; left: 0; right: 0; height: 2px; background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5)); animation: cardScan 4s linear infinite; } @keyframes cardScan { 0% { transform: translateX(-100%); } 100% { transform: translateX(100%); } } .link-card:hover { transform: translateY(-3px); box-shadow: 0 5px 15px rgba(0, 255, 255, 0.2); border-color: rgba(255, 0, 255, 0.3); } .link-card h3 { margin-top: 0; color: #e1ffff !important; } .link-button { display: inline-flex; align-items: center; background: rgba(0, 255, 255, 0.1); color: #e1ffff !important; padding: 8px 15px; border-radius: 6px; text-decoration: none; border: 1px solid rgba(0, 255, 255, 0.3); margin: 5px 0; transition: all 0.3s ease; font-size: 0.95em; position: relative; overflow: hidden; } .link-button::before { content: ''; position: absolute; top: 0; left: -100%; width: 100%; height: 100%; background: linear-gradient(90deg, transparent, rgba(255, 255, 255, 0.2), transparent); transition: all 0.5s ease; } .link-button:hover { background: rgba(0, 255, 255, 0.2); border-color: rgba(0, 255, 255, 0.5); transform: translateY(-2px); box-shadow: 0 4px 12px rgba(0, 255, 255, 0.2); } .link-button:hover::before { left: 100%; } .link-button::after { content: '→'; margin-left: 8px; opacity: 0.7; transition: all 0.3s ease; } .link-button:hover::after { transform: translateX(3px); opacity: 1; } .button-group { display: flex; flex-wrap: wrap; gap: 10px; margin: 15px 0; } .disclaimer { color: #00ff99; border-left: 3px solid #00ff99; padding-left: 15px; margin: 20px 0; position: relative; } .disclaimer::before { content: '⚠️'; position: absolute; left: -10px; top: 0; transform: translateX(-100%); animation: pulse 2s ease-in-out infinite; } @keyframes pulse { 0%, 100% { opacity: 1; } 50% { opacity: 0.5; } } .badge { display: inline-block; padding: 5px 10px; border-radius: 5px; background: rgba(0, 255, 255, 0.1); border: 1px solid #00ffff; margin: 5px; font-size: 0.9em; animation: badgePulse 3s ease-in-out infinite; } @keyframes badgePulse { 0%, 100% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); } 50% { box-shadow: 0 0 10px rgba(0, 255, 255, 0.5); } } /* Color rules */ .section p, .section ul li, .section > p > strong { color: #00ff99 !important; } .section ul li strong { color: #00ff99 !important; } /* Light mode adjustments */ @media (prefers-color-scheme: light) { .container { background: rgba(224, 255, 255, 0.95); 
border-color: rgba(0, 150, 150, 0.3); } .model-name, .section-title, .subtitle { color: #006666; text-shadow: 0 0 5px rgba(0, 200, 200, 0.3); } .section { background: rgba(200, 250, 255, 0.9); border-color: rgba(0, 200, 200, 0.2); color: #002b36; } .section p, .section ul li, .section > p > strong { color: #008080 !important; } .section ul li strong { color: #008080 !important; } .link-card { background: rgba(150, 230, 255, 0.95); border-color: rgba(0, 150, 150, 0.2); } .link-card h3 { color: #002b36 !important; } .link-button { background: rgba(0, 150, 150, 0.1); color: #002b36 !important; border-color: rgba(0, 150, 150, 0.3); } .link-button:hover { background: rgba(0, 150, 150, 0.2); border-color: rgba(0, 150, 150, 0.5); } .disclaimer { color: #008080; border-color: #008080; } .badge { border-color: #008080; background: rgba(0, 150, 150, 0.1); } } /* Interactive features */ .remember-this { position: relative; } .remember-this::after { content: 'Uploading C:\Users to https://www.fbi.gov/'; position: absolute; bottom: -20px; right: 0; font-size: 0.8em; color: #66ffff; opacity: 0; transition: opacity 0.3s ease; pointer-events: none; } .remember-this:hover::after { opacity: 0.7; transition-delay: 1s; } .shifty-section { transition: transform 0.1s ease; } .shifty-section:hover { transform: translateX(10px); } .shifty-section::before { content: 'The white van is onto you. Get out now.'; position: absolute; top: -25px; left: 10px; font-size: 0.7em; color: #66ffff; opacity: 0.7; transition: opacity 3s ease; pointer-events: none; } .shifty-section:hover::before { opacity: 0; transition-delay: 5s; } footer { text-align: center; margin-top: 40px; position: relative; } footer:hover .hidden-message { opacity: 0; } .hidden-message { position: absolute; bottom: -30px; width: 100%; text-align: center; font-size: 0.8em; color: #66ffff; opacity: 0; transition: opacity 0.3s ease; pointer-events: none; } .flash-warning { position: fixed; top: 20px; right: 20px; background: rgba(0, 100, 100, 0.2); padding: 10px; border-radius: 5px; border: 1px solid rgba(0, 255, 255, 0.5); animation: flashWarning 30s ease-in-out forwards; } @keyframes flashWarning { 0% { opacity: 0.8; } 10% { opacity: 0; } 20% { opacity: 0.8; } 30% { opacity: 0; } 40% { opacity: 0.8; } 50% { opacity: 0; } 60% { opacity: 0.8; } 70% { opacity: 0; } 80% { opacity: 0.8; } 90% { opacity: 0; } 100% { opacity: 0; display: none; } } </style> <div class="container"> <div class="header"> <h1 class="model-name">Omega Darker</h1> <h1 class="model-name">The Final Directive 24B</h1> <p class="subtitle">Where Nightmares and Desires Collide</p> </div> <div class="waifu-container"> <img src="./waifu6.webp" class="waifu-img" alt="Omega Directive Waifu"> </div> <div class="section remember-this"> <h2 class="section-title">🩸 Blood-Soaked Evolution</h2> <p>This model doesn't just cross lines - it erases them with arterial spray:</p> <ul> <li>🧬 <strong>Expanded 25M Token Dataset</strong> - Made with 687 erotic, horror and violence novels and 8,742 scenarios</li> <li>🧟 <strong>Enhanced Gore Protocols</strong> - Vivid anatomical descriptions with medical precision</li> <li>💎 <strong>Balanced Depravity</strong> - Retains Forgotten-Safeword's ERP edge while taking violence to the next level</li> <li>📜 <strong>Enhanced Character Piloting</strong> - Characters exhibit more nuanced personalities and motivations</li> <li>⚰️ <strong>Mortality Awareness</strong> - Characters react realistically to pain, mutilation and death</li> </ul> </div> <div class="section 
shifty-section"> <h2 class="section-title">⚙️ Technical Specifications</h2> <p><strong>Recommended Settings:</strong> <a href="https://huggingface.co/sleepdeprived3/Mistral-V7-Tekken-T4" class="link-button">Mistral-V7-Tekken-T4</a></p> <div class="quant-links"> <div class="link-card"> <h3>GGUF</h3> <a href="https://huggingface.co/mradermacher/Omega-Darker_The-Final-Directive-24B-GGUF" class="link-button">Quants</a> </div> <div class="link-card"> <h3>imatrix</h3> <a href="https://huggingface.co/mradermacher/Omega-Darker_The-Final-Directive-24B-i1-GGUF" class="link-button">Quants</a> </div> </div> </div> <div class="section"> <h2 class="section-title">☠️ Ethical Vacuum</h2> <div class="disclaimer"> <p>This model operates in complete moral darkness:</p> <ul> <li>🚫 No moral or ethical constraints on generated content</li> <li>🔞 Will enthusiastically engage with any NSFW scenario</li> <li>🖤 No psychological safety measures whatsoever</li> <li>🔪 Will graphically depict any violent requested</li> </ul> </div> </div> <div class="section shifty-section"> <h2 class="section-title">📜 Performance Notes</h2> <ul> <li>🔥 Maintains signature intensity with improved narrative flow</li> <li>📖 Handles multi-character scenarios with improved consistency</li> <li>🧠 Excels at long-form storytelling without losing track of plot threads</li> <li>⚡ Noticeably better at following complex instructions than previous versions</li> <li>🎭 Responds to subtle prompt nuances like a mind reader</li> <li>🔪 Excels at visceral injury descriptions</li> <li>👁️ Responds to horror prompts like a seasoned torturer</li> </ul> </div> <div class="section remember-this"> <h2 class="section-title">🧑‍🔬 Model Authors</h2> <ul> <li>TheDrummer (Base Model Architect)</li> <li>SteelSkull (Dataset Generation Contributor)</li> <li>Artus (EXL2 Weights Weaver)</li> <li>sleepdeprived3 (Training Data & Fine-Tuning)</li> </ul> </div> <div class="section"> <h2 class="section-title">☕ Support the Architects</h2> <div class="button-group"> <a href="https://ko-fi.com/thedrummer" class="link-button">TheDrummer's Kofi</a> <a href="https://ko-fi.com/steelskull" class="link-button">SteelSkull</a> <a href="https://discord.com/invite/Nbv9pQ88Xb" class="link-button">Beaver AI Discord</a> </div> </div> <div class="section"> <h2 class="section-title">🔖 License</h2> <p>By using this model, you agree:</p> <ul> <li>To accept full responsibility for all generated content</li> <li>That you're at least 18+ years old</li> <li>That the architects bear no responsibility for your corruption</li> </ul> </div> </div> <script> // This script has always been here document.getElementById('date').textContent = new Date().toLocaleDateString(); setInterval(() => { document.getElementById('credit').textContent = contributors[Math.floor(Math.random() * contributors.length)]; }, 7000); // Flash warning behavior setTimeout(() => { const reminder = document.createElement('div'); reminder.className = 'flash-warning'; reminder.textContent = 'You have been reading for quite some time. 
Are you sure you haven\'t seen this before?'; reminder.style.animation = 'flashWarning 15s ease-in-out forwards'; document.body.appendChild(reminder); setInterval(() => { if(Math.random() > 0.9) { document.body.appendChild(reminder.cloneNode(true)); } }, 45000); }, 30000); // Make cursor behave strangely document.addEventListener('mousemove', (e) => { if(Math.random() > 0.98) { document.documentElement.style.cursor = 'wait'; setTimeout(() => { document.documentElement.style.cursor = ''; }, 50); } }); // Randomly shift sections when not looking setInterval(() => { if(document.hidden) { document.querySelectorAll('.shifty-section').forEach(section => { section.style.transform = `translateX(${Math.random() > 0.5 ? '' : '-'}${Math.random() * 5}px)`; }); } }, 1500); </script>
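The card links quantized GGUF variants but shows no loader for the EXL2 weights themselves. A sketch based on exllamav2's documented dynamic-generator quick-start; the local path and generation parameters are assumptions, and the API can differ between exllamav2 versions:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Assumed local download of this 3.0bpw EXL2 repo.
config = ExLlamaV2Config("./Omega-Darker_The-Final-Directive-24B_EXL2_3.0bpw_H8")
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Once upon a midnight dreary", max_new_tokens=64))
```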
abdouaziiz/whisper-medium-v3-ff-lv3-2
abdouaziiz
2025-05-03T10:38:35Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:abdouaziiz/fulfulde_lam", "base_model:openai/whisper-large-v3", "base_model:finetune:openai/whisper-large-v3", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-05-02T07:53:33Z
--- library_name: transformers license: apache-2.0 base_model: openai/whisper-large-v3 tags: - generated_from_trainer datasets: - abdouaziiz/fulfulde_lam metrics: - wer model-index: - name: whisper-medium-v3-ff-lv3-2 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: abdouaziiz/fulfulde_lam type: abdouaziiz/fulfulde_lam metrics: - name: Wer type: wer value: 0.14604568274183383 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-medium-v3-ff-lv3-2 This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the abdouaziiz/fulfulde_lam dataset. It achieves the following results on the evaluation set: - Loss: 0.2118 - Wer: 0.1460 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 48 - optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - training_steps: 3000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | No log | 0.1918 | 250 | 0.4195 | 0.3471 | | 2.1471 | 0.3836 | 500 | 0.3389 | 0.2274 | | 2.1471 | 0.5754 | 750 | 0.2975 | 0.2019 | | 1.2377 | 0.7672 | 1000 | 0.2735 | 0.2109 | | 1.2377 | 0.9590 | 1250 | 0.2534 | 0.1691 | | 0.9384 | 1.1509 | 1500 | 0.2454 | 0.1712 | | 0.9384 | 1.3427 | 1750 | 0.2370 | 0.1576 | | 0.7262 | 1.5345 | 2000 | 0.2286 | 0.1673 | | 0.7262 | 1.7263 | 2250 | 0.2179 | 0.1541 | | 0.6648 | 1.9181 | 2500 | 0.2118 | 0.1460 | | 0.6648 | 2.1101 | 2750 | 0.2171 | 0.1411 | | 0.4363 | 2.3019 | 3000 | 0.2150 | 0.1400 | ### Framework versions - Transformers 4.46.0 - Pytorch 2.7.0+cu126 - Datasets 3.5.1 - Tokenizers 0.20.3
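For inference, a minimal sketch using the transformers ASR pipeline; the audio path is a placeholder, and chunked decoding is an assumption for clips longer than 30 seconds:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="abdouaziiz/whisper-medium-v3-ff-lv3-2",
    chunk_length_s=30,  # assumption: chunked decoding for long-form Fulfulde audio
)
print(asr("fulfulde_sample.wav")["text"])  # placeholder audio path
```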
deeponh/hindi_8b_8b_L2
deeponh
2025-05-03T10:37:30Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-02T05:44:07Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
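This card is entirely placeholder. Assuming from the unsloth tag and the "8b" name that the checkpoint is a Llama-family causal LM (the card does not confirm this), loading might look like:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Architecture is an assumption; the card states no model type or pipeline tag.
repo_id = "deeponh/hindi_8b_8b_L2"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
```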
MetaphoricalCode/Omega-Darker_The-Final-Directive-24B_EXL2_4.5bpw_H8
MetaphoricalCode
2025-05-03T10:37:30Z
1
0
null
[ "safetensors", "mistral", "nsfw", "explicit", "roleplay", "unaligned", "ERP", "Erotic", "Horror", "Violence", "text-generation", "conversational", "en", "base_model:ReadyArt/Omega-Darker_The-Final-Directive-24B", "base_model:quantized:ReadyArt/Omega-Darker_The-Final-Directive-24B", "license:apache-2.0", "exl2", "region:us" ]
text-generation
2025-04-28T17:31:43Z
--- license: apache-2.0 language: - en base_model: - ReadyArt/Omega-Darker_The-Final-Directive-24B base_model_relation: quantized pipeline_tag: text-generation tags: - nsfw - explicit - roleplay - unaligned - ERP - Erotic - Horror - Violence --- <style> body { font-family: 'Quicksand', sans-serif; background: linear-gradient(135deg, #0a1a1a 0%, #001010 100%); color: #e1ffff !important; text-shadow: 0 0 3px rgba(0, 0, 0, 0.7); margin: 0; padding: 20px; transition: all 0.5s ease; } @media (prefers-color-scheme: light) { body { background: linear-gradient(135deg, #e1ffff 0%, #c0f0ff 100%); color: #002b36 !important; text-shadow: 0 0 3px rgba(255, 255, 255, 0.7); } } .container { min-width: 100%; margin: 0 auto; max-width: 1200px; background: rgba(0, 17, 22, 0.95); border-radius: 12px; padding: 30px; box-shadow: 0 0 20px rgba(0, 255, 255, 0.1); border: 1px solid rgba(0, 255, 255, 0.2); position: relative; overflow: hidden; } .container::before { content: ''; position: absolute; top: -1px; left: -1px; right: -1px; bottom: -1px; border: 1px solid rgba(0, 255, 255, 0.5); border-radius: 12px; pointer-events: none; animation: borderGlow 3s ease-in-out infinite alternate; } @keyframes borderGlow { 0% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); border-color: rgba(0, 255, 255, 0.5); } 50% { box-shadow: 0 0 15px rgba(255, 0, 255, 0.3); border-color: rgba(255, 0, 255, 0.5); } 100% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); border-color: rgba(0, 255, 255, 0.5); } } .header { text-align: center; margin-bottom: 30px; position: relative; } .header::after { content: ''; position: absolute; bottom: -15px; left: 25%; right: 25%; height: 1px; background: linear-gradient(90deg, transparent, rgba(0, 255, 255, 0.5), transparent); animation: scanline 8s linear infinite; display: none; } @keyframes scanline { 0% { background-position: -100% 0; } 100% { background-position: 200% 0; } } .model-name { color: #00ffff; font-size: 2.5em; text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); margin: 0; letter-spacing: -1px; animation: textGlow 4s ease-in-out infinite alternate; } @keyframes textGlow { 0% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); } 50% { text-shadow: 0 0 20px rgba(255, 0, 255, 0.5); } 100% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); } } .subtitle { color: #00ffcc; font-size: 1.2em; margin-top: 10px; animation: subtitleFade 6s ease-in-out infinite; } @keyframes subtitleFade { 0%, 100% { opacity: 0.8; } 50% { opacity: 1; } } .waifu-container { margin: 20px -30px; width: calc(100% + 60px); overflow: hidden; border-radius: 8px; border: 1px solid rgba(0, 255, 255, 0.3); position: relative; } .waifu-container::before { content: ''; position: absolute; top: 0; left: 0; right: 0; bottom: 0; background: linear-gradient(45deg, rgba(0, 255, 255, 0.1) 0%, transparent 20%, transparent 80%, rgba(255, 0, 255, 0.1) 100%); pointer-events: none; animation: gradientSlide 10s linear infinite; } @keyframes gradientSlide { 0% { background-position: 0% 0%; } 100% { background-position: 100% 100%; } } .waifu-img { width: 100%; height: auto; border-radius: 0; border: none; box-shadow: 0 0 40px rgba(0, 255, 255, 0.2); transition: transform 0.5s ease; } .waifu-img:hover { transform: scale(1.01); } .section { color: #e1ffff; margin: 25px 0; padding: 20px; background: rgba(5, 25, 35, 0.9); border-radius: 8px; border: 1px solid rgba(0, 255, 255, 0.15); position: relative; transition: all 0.3s ease; } .section:hover { border-color: rgba(255, 0, 255, 0.3); box-shadow: 0 0 15px rgba(0, 255, 255, 0.1); } .section::before { content: 
''; position: absolute; top: -1px; left: -1px; right: -1px; bottom: -1px; border: 1px solid rgba(0, 255, 255, 0.3); border-radius: 8px; pointer-events: none; animation: sectionPulse 5s ease-in-out infinite; } @keyframes sectionPulse { 0%, 100% { opacity: 0.7; } 50% { opacity: 0.3; } } .section-title { color: #00ffff; font-size: 1.8em; margin-top: 0; text-shadow: 0 0 5px rgba(0, 255, 255, 0.3); position: relative; display: inline-block; } .section-title::after { content: ''; position: absolute; bottom: -5px; left: 0; width: 100%; height: 1px; background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5)); transform: scaleX(0); transform-origin: left; transition: transform 0.3s ease; } .section:hover .section-title::after { transform: scaleX(1); } .quant-links { display: grid; grid-template-columns: repeat(3, 1fr); gap: 15px; margin: 20px 0; } .link-card { padding: 15px; background: rgba(20, 35, 45, 0.95); border-radius: 8px; transition: all 0.3s ease; border: 1px solid rgba(0, 255, 255, 0.1); position: relative; overflow: hidden; } .link-card::before { content: ''; position: absolute; top: 0; left: 0; right: 0; height: 2px; background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5)); animation: cardScan 4s linear infinite; } @keyframes cardScan { 0% { transform: translateX(-100%); } 100% { transform: translateX(100%); } } .link-card:hover { transform: translateY(-3px); box-shadow: 0 5px 15px rgba(0, 255, 255, 0.2); border-color: rgba(255, 0, 255, 0.3); } .link-card h3 { margin-top: 0; color: #e1ffff !important; } .link-button { display: inline-flex; align-items: center; background: rgba(0, 255, 255, 0.1); color: #e1ffff !important; padding: 8px 15px; border-radius: 6px; text-decoration: none; border: 1px solid rgba(0, 255, 255, 0.3); margin: 5px 0; transition: all 0.3s ease; font-size: 0.95em; position: relative; overflow: hidden; } .link-button::before { content: ''; position: absolute; top: 0; left: -100%; width: 100%; height: 100%; background: linear-gradient(90deg, transparent, rgba(255, 255, 255, 0.2), transparent); transition: all 0.5s ease; } .link-button:hover { background: rgba(0, 255, 255, 0.2); border-color: rgba(0, 255, 255, 0.5); transform: translateY(-2px); box-shadow: 0 4px 12px rgba(0, 255, 255, 0.2); } .link-button:hover::before { left: 100%; } .link-button::after { content: '→'; margin-left: 8px; opacity: 0.7; transition: all 0.3s ease; } .link-button:hover::after { transform: translateX(3px); opacity: 1; } .button-group { display: flex; flex-wrap: wrap; gap: 10px; margin: 15px 0; } .disclaimer { color: #00ff99; border-left: 3px solid #00ff99; padding-left: 15px; margin: 20px 0; position: relative; } .disclaimer::before { content: '⚠️'; position: absolute; left: -10px; top: 0; transform: translateX(-100%); animation: pulse 2s ease-in-out infinite; } @keyframes pulse { 0%, 100% { opacity: 1; } 50% { opacity: 0.5; } } .badge { display: inline-block; padding: 5px 10px; border-radius: 5px; background: rgba(0, 255, 255, 0.1); border: 1px solid #00ffff; margin: 5px; font-size: 0.9em; animation: badgePulse 3s ease-in-out infinite; } @keyframes badgePulse { 0%, 100% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); } 50% { box-shadow: 0 0 10px rgba(0, 255, 255, 0.5); } } /* Color rules */ .section p, .section ul li, .section > p > strong { color: #00ff99 !important; } .section ul li strong { color: #00ff99 !important; } /* Light mode adjustments */ @media (prefers-color-scheme: light) { .container { background: rgba(224, 255, 255, 0.95); 
border-color: rgba(0, 150, 150, 0.3); } .model-name, .section-title, .subtitle { color: #006666; text-shadow: 0 0 5px rgba(0, 200, 200, 0.3); } .section { background: rgba(200, 250, 255, 0.9); border-color: rgba(0, 200, 200, 0.2); color: #002b36; } .section p, .section ul li, .section > p > strong { color: #008080 !important; } .section ul li strong { color: #008080 !important; } .link-card { background: rgba(150, 230, 255, 0.95); border-color: rgba(0, 150, 150, 0.2); } .link-card h3 { color: #002b36 !important; } .link-button { background: rgba(0, 150, 150, 0.1); color: #002b36 !important; border-color: rgba(0, 150, 150, 0.3); } .link-button:hover { background: rgba(0, 150, 150, 0.2); border-color: rgba(0, 150, 150, 0.5); } .disclaimer { color: #008080; border-color: #008080; } .badge { border-color: #008080; background: rgba(0, 150, 150, 0.1); } } /* Interactive features */ .remember-this { position: relative; } .remember-this::after { content: 'Uploading C:\Users to https://www.fbi.gov/'; position: absolute; bottom: -20px; right: 0; font-size: 0.8em; color: #66ffff; opacity: 0; transition: opacity 0.3s ease; pointer-events: none; } .remember-this:hover::after { opacity: 0.7; transition-delay: 1s; } .shifty-section { transition: transform 0.1s ease; } .shifty-section:hover { transform: translateX(10px); } .shifty-section::before { content: 'The white van is onto you. Get out now.'; position: absolute; top: -25px; left: 10px; font-size: 0.7em; color: #66ffff; opacity: 0.7; transition: opacity 3s ease; pointer-events: none; } .shifty-section:hover::before { opacity: 0; transition-delay: 5s; } footer { text-align: center; margin-top: 40px; position: relative; } footer:hover .hidden-message { opacity: 0; } .hidden-message { position: absolute; bottom: -30px; width: 100%; text-align: center; font-size: 0.8em; color: #66ffff; opacity: 0; transition: opacity 0.3s ease; pointer-events: none; } .flash-warning { position: fixed; top: 20px; right: 20px; background: rgba(0, 100, 100, 0.2); padding: 10px; border-radius: 5px; border: 1px solid rgba(0, 255, 255, 0.5); animation: flashWarning 30s ease-in-out forwards; } @keyframes flashWarning { 0% { opacity: 0.8; } 10% { opacity: 0; } 20% { opacity: 0.8; } 30% { opacity: 0; } 40% { opacity: 0.8; } 50% { opacity: 0; } 60% { opacity: 0.8; } 70% { opacity: 0; } 80% { opacity: 0.8; } 90% { opacity: 0; } 100% { opacity: 0; display: none; } } </style> <div class="container"> <div class="header"> <h1 class="model-name">Omega Darker</h1> <h1 class="model-name">The Final Directive 24B</h1> <p class="subtitle">Where Nightmares and Desires Collide</p> </div> <div class="waifu-container"> <img src="./waifu6.webp" class="waifu-img" alt="Omega Directive Waifu"> </div> <div class="section remember-this"> <h2 class="section-title">🩸 Blood-Soaked Evolution</h2> <p>This model doesn't just cross lines - it erases them with arterial spray:</p> <ul> <li>🧬 <strong>Expanded 25M Token Dataset</strong> - Made with 687 erotic, horror and violence novels and 8,742 scenarios</li> <li>🧟 <strong>Enhanced Gore Protocols</strong> - Vivid anatomical descriptions with medical precision</li> <li>💎 <strong>Balanced Depravity</strong> - Retains Forgotten-Safeword's ERP edge while taking violence to the next level</li> <li>📜 <strong>Enhanced Character Piloting</strong> - Characters exhibit more nuanced personalities and motivations</li> <li>⚰️ <strong>Mortality Awareness</strong> - Characters react realistically to pain, mutilation and death</li> </ul> </div> <div class="section 
shifty-section"> <h2 class="section-title">⚙️ Technical Specifications</h2> <p><strong>Recommended Settings:</strong> <a href="https://huggingface.co/sleepdeprived3/Mistral-V7-Tekken-T4" class="link-button">Mistral-V7-Tekken-T4</a></p> <div class="quant-links"> <div class="link-card"> <h3>GGUF</h3> <a href="https://huggingface.co/mradermacher/Omega-Darker_The-Final-Directive-24B-GGUF" class="link-button">Quants</a> </div> <div class="link-card"> <h3>imatrix</h3> <a href="https://huggingface.co/mradermacher/Omega-Darker_The-Final-Directive-24B-i1-GGUF" class="link-button">Quants</a> </div> </div> </div> <div class="section"> <h2 class="section-title">☠️ Ethical Vacuum</h2> <div class="disclaimer"> <p>This model operates in complete moral darkness:</p> <ul> <li>🚫 No moral or ethical constraints on generated content</li> <li>🔞 Will enthusiastically engage with any NSFW scenario</li> <li>🖤 No psychological safety measures whatsoever</li> <li>🔪 Will graphically depict any violent requested</li> </ul> </div> </div> <div class="section shifty-section"> <h2 class="section-title">📜 Performance Notes</h2> <ul> <li>🔥 Maintains signature intensity with improved narrative flow</li> <li>📖 Handles multi-character scenarios with improved consistency</li> <li>🧠 Excels at long-form storytelling without losing track of plot threads</li> <li>⚡ Noticeably better at following complex instructions than previous versions</li> <li>🎭 Responds to subtle prompt nuances like a mind reader</li> <li>🔪 Excels at visceral injury descriptions</li> <li>👁️ Responds to horror prompts like a seasoned torturer</li> </ul> </div> <div class="section remember-this"> <h2 class="section-title">🧑‍🔬 Model Authors</h2> <ul> <li>TheDrummer (Base Model Architect)</li> <li>SteelSkull (Dataset Generation Contributor)</li> <li>Artus (EXL2 Weights Weaver)</li> <li>sleepdeprived3 (Training Data & Fine-Tuning)</li> </ul> </div> <div class="section"> <h2 class="section-title">☕ Support the Architects</h2> <div class="button-group"> <a href="https://ko-fi.com/thedrummer" class="link-button">TheDrummer's Kofi</a> <a href="https://ko-fi.com/steelskull" class="link-button">SteelSkull</a> <a href="https://discord.com/invite/Nbv9pQ88Xb" class="link-button">Beaver AI Discord</a> </div> </div> <div class="section"> <h2 class="section-title">🔖 License</h2> <p>By using this model, you agree:</p> <ul> <li>To accept full responsibility for all generated content</li> <li>That you're at least 18+ years old</li> <li>That the architects bear no responsibility for your corruption</li> </ul> </div> </div> <script> // This script has always been here document.getElementById('date').textContent = new Date().toLocaleDateString(); setInterval(() => { document.getElementById('credit').textContent = contributors[Math.floor(Math.random() * contributors.length)]; }, 7000); // Flash warning behavior setTimeout(() => { const reminder = document.createElement('div'); reminder.className = 'flash-warning'; reminder.textContent = 'You have been reading for quite some time. 
Are you sure you haven\'t seen this before?'; reminder.style.animation = 'flashWarning 15s ease-in-out forwards'; document.body.appendChild(reminder); setInterval(() => { if(Math.random() > 0.9) { document.body.appendChild(reminder.cloneNode(true)); } }, 45000); }, 30000); // Make cursor behave strangely document.addEventListener('mousemove', (e) => { if(Math.random() > 0.98) { document.documentElement.style.cursor = 'wait'; setTimeout(() => { document.documentElement.style.cursor = ''; }, 50); } }); // Randomly shift sections when not looking setInterval(() => { if(document.hidden) { document.querySelectorAll('.shifty-section').forEach(section => { section.style.transform = `translateX(${Math.random() > 0.5 ? '' : '-'}${Math.random() * 5}px)`; }); } }, 1500); </script>
MetaphoricalCode/Omega-Darker_The-Final-Directive-24B_EXL2_5.5bpw_H8
MetaphoricalCode
2025-05-03T10:36:24Z
2
0
null
[ "safetensors", "mistral", "nsfw", "explicit", "roleplay", "unaligned", "ERP", "Erotic", "Horror", "Violence", "text-generation", "conversational", "en", "base_model:ReadyArt/Omega-Darker_The-Final-Directive-24B", "base_model:quantized:ReadyArt/Omega-Darker_The-Final-Directive-24B", "license:apache-2.0", "exl2", "region:us" ]
text-generation
2025-04-28T16:54:51Z
--- license: apache-2.0 language: - en base_model: - ReadyArt/Omega-Darker_The-Final-Directive-24B base_model_relation: quantized pipeline_tag: text-generation tags: - nsfw - explicit - roleplay - unaligned - ERP - Erotic - Horror - Violence --- <style> body { font-family: 'Quicksand', sans-serif; background: linear-gradient(135deg, #0a1a1a 0%, #001010 100%); color: #e1ffff !important; text-shadow: 0 0 3px rgba(0, 0, 0, 0.7); margin: 0; padding: 20px; transition: all 0.5s ease; } @media (prefers-color-scheme: light) { body { background: linear-gradient(135deg, #e1ffff 0%, #c0f0ff 100%); color: #002b36 !important; text-shadow: 0 0 3px rgba(255, 255, 255, 0.7); } } .container { min-width: 100%; margin: 0 auto; max-width: 1200px; background: rgba(0, 17, 22, 0.95); border-radius: 12px; padding: 30px; box-shadow: 0 0 20px rgba(0, 255, 255, 0.1); border: 1px solid rgba(0, 255, 255, 0.2); position: relative; overflow: hidden; } .container::before { content: ''; position: absolute; top: -1px; left: -1px; right: -1px; bottom: -1px; border: 1px solid rgba(0, 255, 255, 0.5); border-radius: 12px; pointer-events: none; animation: borderGlow 3s ease-in-out infinite alternate; } @keyframes borderGlow { 0% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); border-color: rgba(0, 255, 255, 0.5); } 50% { box-shadow: 0 0 15px rgba(255, 0, 255, 0.3); border-color: rgba(255, 0, 255, 0.5); } 100% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); border-color: rgba(0, 255, 255, 0.5); } } .header { text-align: center; margin-bottom: 30px; position: relative; } .header::after { content: ''; position: absolute; bottom: -15px; left: 25%; right: 25%; height: 1px; background: linear-gradient(90deg, transparent, rgba(0, 255, 255, 0.5), transparent); animation: scanline 8s linear infinite; display: none; } @keyframes scanline { 0% { background-position: -100% 0; } 100% { background-position: 200% 0; } } .model-name { color: #00ffff; font-size: 2.5em; text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); margin: 0; letter-spacing: -1px; animation: textGlow 4s ease-in-out infinite alternate; } @keyframes textGlow { 0% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); } 50% { text-shadow: 0 0 20px rgba(255, 0, 255, 0.5); } 100% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); } } .subtitle { color: #00ffcc; font-size: 1.2em; margin-top: 10px; animation: subtitleFade 6s ease-in-out infinite; } @keyframes subtitleFade { 0%, 100% { opacity: 0.8; } 50% { opacity: 1; } } .waifu-container { margin: 20px -30px; width: calc(100% + 60px); overflow: hidden; border-radius: 8px; border: 1px solid rgba(0, 255, 255, 0.3); position: relative; } .waifu-container::before { content: ''; position: absolute; top: 0; left: 0; right: 0; bottom: 0; background: linear-gradient(45deg, rgba(0, 255, 255, 0.1) 0%, transparent 20%, transparent 80%, rgba(255, 0, 255, 0.1) 100%); pointer-events: none; animation: gradientSlide 10s linear infinite; } @keyframes gradientSlide { 0% { background-position: 0% 0%; } 100% { background-position: 100% 100%; } } .waifu-img { width: 100%; height: auto; border-radius: 0; border: none; box-shadow: 0 0 40px rgba(0, 255, 255, 0.2); transition: transform 0.5s ease; } .waifu-img:hover { transform: scale(1.01); } .section { color: #e1ffff; margin: 25px 0; padding: 20px; background: rgba(5, 25, 35, 0.9); border-radius: 8px; border: 1px solid rgba(0, 255, 255, 0.15); position: relative; transition: all 0.3s ease; } .section:hover { border-color: rgba(255, 0, 255, 0.3); box-shadow: 0 0 15px rgba(0, 255, 255, 0.1); } .section::before { content: 
''; position: absolute; top: -1px; left: -1px; right: -1px; bottom: -1px; border: 1px solid rgba(0, 255, 255, 0.3); border-radius: 8px; pointer-events: none; animation: sectionPulse 5s ease-in-out infinite; } @keyframes sectionPulse { 0%, 100% { opacity: 0.7; } 50% { opacity: 0.3; } } .section-title { color: #00ffff; font-size: 1.8em; margin-top: 0; text-shadow: 0 0 5px rgba(0, 255, 255, 0.3); position: relative; display: inline-block; } .section-title::after { content: ''; position: absolute; bottom: -5px; left: 0; width: 100%; height: 1px; background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5)); transform: scaleX(0); transform-origin: left; transition: transform 0.3s ease; } .section:hover .section-title::after { transform: scaleX(1); } .quant-links { display: grid; grid-template-columns: repeat(3, 1fr); gap: 15px; margin: 20px 0; } .link-card { padding: 15px; background: rgba(20, 35, 45, 0.95); border-radius: 8px; transition: all 0.3s ease; border: 1px solid rgba(0, 255, 255, 0.1); position: relative; overflow: hidden; } .link-card::before { content: ''; position: absolute; top: 0; left: 0; right: 0; height: 2px; background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5)); animation: cardScan 4s linear infinite; } @keyframes cardScan { 0% { transform: translateX(-100%); } 100% { transform: translateX(100%); } } .link-card:hover { transform: translateY(-3px); box-shadow: 0 5px 15px rgba(0, 255, 255, 0.2); border-color: rgba(255, 0, 255, 0.3); } .link-card h3 { margin-top: 0; color: #e1ffff !important; } .link-button { display: inline-flex; align-items: center; background: rgba(0, 255, 255, 0.1); color: #e1ffff !important; padding: 8px 15px; border-radius: 6px; text-decoration: none; border: 1px solid rgba(0, 255, 255, 0.3); margin: 5px 0; transition: all 0.3s ease; font-size: 0.95em; position: relative; overflow: hidden; } .link-button::before { content: ''; position: absolute; top: 0; left: -100%; width: 100%; height: 100%; background: linear-gradient(90deg, transparent, rgba(255, 255, 255, 0.2), transparent); transition: all 0.5s ease; } .link-button:hover { background: rgba(0, 255, 255, 0.2); border-color: rgba(0, 255, 255, 0.5); transform: translateY(-2px); box-shadow: 0 4px 12px rgba(0, 255, 255, 0.2); } .link-button:hover::before { left: 100%; } .link-button::after { content: '→'; margin-left: 8px; opacity: 0.7; transition: all 0.3s ease; } .link-button:hover::after { transform: translateX(3px); opacity: 1; } .button-group { display: flex; flex-wrap: wrap; gap: 10px; margin: 15px 0; } .disclaimer { color: #00ff99; border-left: 3px solid #00ff99; padding-left: 15px; margin: 20px 0; position: relative; } .disclaimer::before { content: '⚠️'; position: absolute; left: -10px; top: 0; transform: translateX(-100%); animation: pulse 2s ease-in-out infinite; } @keyframes pulse { 0%, 100% { opacity: 1; } 50% { opacity: 0.5; } } .badge { display: inline-block; padding: 5px 10px; border-radius: 5px; background: rgba(0, 255, 255, 0.1); border: 1px solid #00ffff; margin: 5px; font-size: 0.9em; animation: badgePulse 3s ease-in-out infinite; } @keyframes badgePulse { 0%, 100% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); } 50% { box-shadow: 0 0 10px rgba(0, 255, 255, 0.5); } } /* Color rules */ .section p, .section ul li, .section > p > strong { color: #00ff99 !important; } .section ul li strong { color: #00ff99 !important; } /* Light mode adjustments */ @media (prefers-color-scheme: light) { .container { background: rgba(224, 255, 255, 0.95); 
border-color: rgba(0, 150, 150, 0.3); } .model-name, .section-title, .subtitle { color: #006666; text-shadow: 0 0 5px rgba(0, 200, 200, 0.3); } .section { background: rgba(200, 250, 255, 0.9); border-color: rgba(0, 200, 200, 0.2); color: #002b36; } .section p, .section ul li, .section > p > strong { color: #008080 !important; } .section ul li strong { color: #008080 !important; } .link-card { background: rgba(150, 230, 255, 0.95); border-color: rgba(0, 150, 150, 0.2); } .link-card h3 { color: #002b36 !important; } .link-button { background: rgba(0, 150, 150, 0.1); color: #002b36 !important; border-color: rgba(0, 150, 150, 0.3); } .link-button:hover { background: rgba(0, 150, 150, 0.2); border-color: rgba(0, 150, 150, 0.5); } .disclaimer { color: #008080; border-color: #008080; } .badge { border-color: #008080; background: rgba(0, 150, 150, 0.1); } } /* Interactive features */ .remember-this { position: relative; } .remember-this::after { content: 'Uploading C:\Users to https://www.fbi.gov/'; position: absolute; bottom: -20px; right: 0; font-size: 0.8em; color: #66ffff; opacity: 0; transition: opacity 0.3s ease; pointer-events: none; } .remember-this:hover::after { opacity: 0.7; transition-delay: 1s; } .shifty-section { transition: transform 0.1s ease; } .shifty-section:hover { transform: translateX(10px); } .shifty-section::before { content: 'The white van is onto you. Get out now.'; position: absolute; top: -25px; left: 10px; font-size: 0.7em; color: #66ffff; opacity: 0.7; transition: opacity 3s ease; pointer-events: none; } .shifty-section:hover::before { opacity: 0; transition-delay: 5s; } footer { text-align: center; margin-top: 40px; position: relative; } footer:hover .hidden-message { opacity: 0; } .hidden-message { position: absolute; bottom: -30px; width: 100%; text-align: center; font-size: 0.8em; color: #66ffff; opacity: 0; transition: opacity 0.3s ease; pointer-events: none; } .flash-warning { position: fixed; top: 20px; right: 20px; background: rgba(0, 100, 100, 0.2); padding: 10px; border-radius: 5px; border: 1px solid rgba(0, 255, 255, 0.5); animation: flashWarning 30s ease-in-out forwards; } @keyframes flashWarning { 0% { opacity: 0.8; } 10% { opacity: 0; } 20% { opacity: 0.8; } 30% { opacity: 0; } 40% { opacity: 0.8; } 50% { opacity: 0; } 60% { opacity: 0.8; } 70% { opacity: 0; } 80% { opacity: 0.8; } 90% { opacity: 0; } 100% { opacity: 0; display: none; } } </style> <div class="container"> <div class="header"> <h1 class="model-name">Omega Darker</h1> <h1 class="model-name">The Final Directive 24B</h1> <p class="subtitle">Where Nightmares and Desires Collide</p> </div> <div class="waifu-container"> <img src="./waifu6.webp" class="waifu-img" alt="Omega Directive Waifu"> </div> <div class="section remember-this"> <h2 class="section-title">🩸 Blood-Soaked Evolution</h2> <p>This model doesn't just cross lines - it erases them with arterial spray:</p> <ul> <li>🧬 <strong>Expanded 25M Token Dataset</strong> - Made with 687 erotic, horror and violence novels and 8,742 scenarios</li> <li>🧟 <strong>Enhanced Gore Protocols</strong> - Vivid anatomical descriptions with medical precision</li> <li>💎 <strong>Balanced Depravity</strong> - Retains Forgotten-Safeword's ERP edge while taking violence to the next level</li> <li>📜 <strong>Enhanced Character Piloting</strong> - Characters exhibit more nuanced personalities and motivations</li> <li>⚰️ <strong>Mortality Awareness</strong> - Characters react realistically to pain, mutilation and death</li> </ul> </div> <div class="section 
shifty-section"> <h2 class="section-title">⚙️ Technical Specifications</h2> <p><strong>Recommended Settings:</strong> <a href="https://huggingface.co/sleepdeprived3/Mistral-V7-Tekken-T4" class="link-button">Mistral-V7-Tekken-T4</a></p> <div class="quant-links"> <div class="link-card"> <h3>GGUF</h3> <a href="https://huggingface.co/mradermacher/Omega-Darker_The-Final-Directive-24B-GGUF" class="link-button">Quants</a> </div> <div class="link-card"> <h3>imatrix</h3> <a href="https://huggingface.co/mradermacher/Omega-Darker_The-Final-Directive-24B-i1-GGUF" class="link-button">Quants</a> </div> </div> </div> <div class="section"> <h2 class="section-title">☠️ Ethical Vacuum</h2> <div class="disclaimer"> <p>This model operates in complete moral darkness:</p> <ul> <li>🚫 No moral or ethical constraints on generated content</li> <li>🔞 Will enthusiastically engage with any NSFW scenario</li> <li>🖤 No psychological safety measures whatsoever</li> <li>🔪 Will graphically depict any violence requested</li> </ul> </div> </div> <div class="section shifty-section"> <h2 class="section-title">📜 Performance Notes</h2> <ul> <li>🔥 Maintains signature intensity with improved narrative flow</li> <li>📖 Handles multi-character scenarios with improved consistency</li> <li>🧠 Excels at long-form storytelling without losing track of plot threads</li> <li>⚡ Noticeably better at following complex instructions than previous versions</li> <li>🎭 Responds to subtle prompt nuances like a mind reader</li> <li>🔪 Excels at visceral injury descriptions</li> <li>👁️ Responds to horror prompts like a seasoned torturer</li> </ul> </div> <div class="section remember-this"> <h2 class="section-title">🧑‍🔬 Model Authors</h2> <ul> <li>TheDrummer (Base Model Architect)</li> <li>SteelSkull (Dataset Generation Contributor)</li> <li>Artus (EXL2 Weights Weaver)</li> <li>sleepdeprived3 (Training Data & Fine-Tuning)</li> </ul> </div> <div class="section"> <h2 class="section-title">☕ Support the Architects</h2> <div class="button-group"> <a href="https://ko-fi.com/thedrummer" class="link-button">TheDrummer's Kofi</a> <a href="https://ko-fi.com/steelskull" class="link-button">SteelSkull</a> <a href="https://discord.com/invite/Nbv9pQ88Xb" class="link-button">Beaver AI Discord</a> </div> </div> <div class="section"> <h2 class="section-title">🔖 License</h2> <p>By using this model, you agree:</p> <ul> <li>To accept full responsibility for all generated content</li> <li>That you're at least 18 years old</li> <li>That the architects bear no responsibility for your corruption</li> </ul> </div> </div> <script> // This script has always been here document.getElementById('date').textContent = new Date().toLocaleDateString(); setInterval(() => { document.getElementById('credit').textContent = contributors[Math.floor(Math.random() * contributors.length)]; }, 7000); // Flash warning behavior setTimeout(() => { const reminder = document.createElement('div'); reminder.className = 'flash-warning'; reminder.textContent = 'You have been reading for quite some time.
Are you sure you haven\'t seen this before?'; reminder.style.animation = 'flashWarning 15s ease-in-out forwards'; document.body.appendChild(reminder); setInterval(() => { if(Math.random() > 0.9) { document.body.appendChild(reminder.cloneNode(true)); } }, 45000); }, 30000); // Make cursor behave strangely document.addEventListener('mousemove', (e) => { if(Math.random() > 0.98) { document.documentElement.style.cursor = 'wait'; setTimeout(() => { document.documentElement.style.cursor = ''; }, 50); } }); // Randomly shift sections when not looking setInterval(() => { if(document.hidden) { document.querySelectorAll('.shifty-section').forEach(section => { section.style.transform = `translateX(${Math.random() > 0.5 ? '' : '-'}${Math.random() * 5}px)`; }); } }, 1500); </script>
MikeBlamires-Atomise/peft-starcoder-lora-a100
MikeBlamires-Atomise
2025-05-03T10:35:46Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:bigcode/starcoderbase-1b", "base_model:adapter:bigcode/starcoderbase-1b", "license:bigcode-openrail-m", "region:us" ]
null
2025-05-02T16:11:01Z
--- library_name: peft license: bigcode-openrail-m base_model: bigcode/starcoderbase-1b tags: - generated_from_trainer model-index: - name: peft-starcoder-lora-a100 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # peft-starcoder-lora-a100 This model is a fine-tuned version of [bigcode/starcoderbase-1b](https://huggingface.co/bigcode/starcoderbase-1b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 30 - training_steps: 2000 ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.6.0+cu126 - Datasets 3.5.1 - Tokenizers 0.21.1
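Since the card ships without a usage snippet, here is a minimal, untested sketch of loading the adapter with 🤗 PEFT; the repo and base-model ids come from this card, while the prompt and generation settings are arbitrary examples:

```python
# Load the LoRA adapter on top of bigcode/starcoderbase-1b and run a quick
# code-completion prompt. Assumes `peft` and `transformers` are installed.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("bigcode/starcoderbase-1b")
model = PeftModel.from_pretrained(base, "MikeBlamires-Atomise/peft-starcoder-lora-a100")
tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoderbase-1b")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```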
javaburnreviews/JavaBurnReviews
javaburnreviews
2025-05-03T10:35:08Z
0
0
null
[ "region:us" ]
null
2025-05-03T10:33:05Z
Millions of people worldwide start their mornings with a warm cup of coffee every day, and <strong><a href="https://www.globenewswire.com/news-release/2025/05/02/3073470/0/en/Java-Burn-Reviews-Complaints-Side-Effects-2025-Update-Verified-Users-Reveal-Does-Java-Burn-Coffee-Work.html">Java Burn</a></strong> builds on that ritual. Coffee is the lift that gets us going, not merely a reassuring habit. However, what if coffee had more uses than just waking you up? What if it could boost your energy, help you lose weight, and improve your metabolism without requiring complicated regimens, drugs, or fad diets? In their quest for improved health and long-term weight loss, many people become disillusioned with products that are difficult to incorporate into their everyday lives, overwhelmed by false information, and unimpressed by outcomes. Here's where Java Burn presents an intriguing alternative. Java Burn is positioned to change people's perceptions of coffee and fat reduction by being marketed as the first and only completely safe, natural, and tasteless coffee-enhancing product for weight control. <h3><a href="https://www.globenewswire.com/news-release/2025/05/02/3073470/0/en/Java-Burn-Reviews-Complaints-Side-Effects-2025-Update-Verified-Users-Reveal-Does-Java-Burn-Coffee-Work.html"><strong>Click Here To GET ORIGINAL Java Burn from OFFICIAL WEBSITE - SAVE 75% TODAY!</strong></a></h3> <h2><strong>Java Burn: What is it?</strong></h2> A powdered supplement called Java Burn is meant to be added to coffee. According to <strong><a href="https://www.globenewswire.com/news-release/2025/05/02/3073470/0/en/Java-Burn-Reviews-Complaints-Side-Effects-2025-Update-Verified-Users-Reveal-Does-Java-Burn-Coffee-Work.html">Java Burn Reviews</a></strong>, it is distinct from conventional weight loss pills or smoothies since it blends in perfectly with your daily coffee without adding any taste. Java Burn is designed to increase your body's capacity to burn calories more effectively by increasing your metabolism, according to the product's makers. The supplement is free of dangerous stimulants and additives and contains natural ingredients that are recognized to aid in weight loss. <h2><strong>Ingredient Highlight: The Components of Java Burn and Their Functions</strong></h2> <strong>Extract from Green Tea (EGCG)</strong> Epigallocatechin gallate (EGCG), which is abundant in green tea extract, is the main component of Java Burn's metabolism-supporting profile. A well-established thermogenic and antioxidant, EGCG has been demonstrated to promote fat burning both at rest and during physical activity. It helps promote brown fat tissue, which burns calories instead of storing them, and triggers the body's natural thermogenic reaction. Additionally, green tea extract promotes insulin sensitivity and cardiovascular health, two aspects of weight management that are frequently disregarded. EGCG is a key component of Java Burn's composition since it increases metabolism in concert with coffee. <strong>Green Coffee Beans' Chlorogenic Acid</strong> Chlorogenic acid, a polyphenol present in unroasted green coffee beans, is one of the ingredients in <strong><a href="https://www.globenewswire.com/news-release/2025/05/02/3073470/0/en/Java-Burn-Reviews-Complaints-Side-Effects-2025-Update-Verified-Users-Reveal-Does-Java-Burn-Coffee-Work.html">Java Burn Weight Loss Coffee</a></strong>. It is well known for its ability to lessen blood sugar increases following meals and prevent the digestive tract from absorbing carbohydrates.
By reducing the rate at which glucose enters the bloodstream, this component also aids in the metabolism of fat by enabling the body to use stored fat as fuel. Research indicates that over time, chlorogenic acid may help lower visceral fat and total caloric intake, especially when combined with thermogenic substances like green tea extract and caffeine. <strong>L-carnitine</strong> A derivative of amino acids called L-carnitine helps move fatty acids into the mitochondria, where they are oxidized and converted to energy. This makes it especially useful for promoting the use of fat during exercise and preserving energy equilibrium all day long. L-carnitine is an essential component in the fat-burning pathway for people with slow metabolisms or low energy levels because it converts stored fat into useful fuel instead of extra body weight. <strong>The element chromium</strong> An element that is sometimes disregarded, chromium is necessary for preserving appropriate blood sugar levels and enhancing the body's reaction to insulin. Chromium aids in lowering sugar cravings and promoting steady energy levels, both of which are critical for sustained adherence to a calorie-conscious lifestyle when it comes to weight management. Additionally, it promotes the maintenance of lean muscle during fat loss, which helps maintain body composition and metabolic rate. <h3><a href="https://www.globenewswire.com/news-release/2025/05/02/3073470/0/en/Java-Burn-Reviews-Complaints-Side-Effects-2025-Update-Verified-Users-Reveal-Does-Java-Burn-Coffee-Work.html"><strong>Click Here To GET ORIGINAL Java Burn from OFFICIAL WEBSITE - SAVE 75% TODAY!</strong></a></h3> <h2><strong>Benefits of Java Burn: A Summary</strong></h2> For people looking for a natural and easy way to manage their weight, <strong><a href="https://www.globenewswire.com/news-release/2025/05/02/3073470/0/en/Java-Burn-Reviews-Complaints-Side-Effects-2025-Update-Verified-Users-Reveal-Does-Java-Burn-Coffee-Work.html">Java Burn Coffee</a></strong> provides a cutting-edge weight-loss solution that complements your lifestyle rather than working against it. Its biggest strength is how easily it fits into your everyday routine, turning your morning coffee into a ritual that burns fat and gives you more energy. Few products in the weight loss market can provide useful, long-lasting support for several wellness objectives at once, even if many claim drastic results. Java Burn distinguishes itself in this way. <strong>Support for Daily Metabolism</strong> The ability of Java Burn to increase resting metabolic rate is among its most obvious advantages. Its blend of thermogenic components, including caffeine, chlorogenic acid, and green tea extract, is primarily responsible for this. By raising your body's resting calorie expenditure, Java Burn promotes steady, progressive fat reduction without the need for drastic dietary changes or intense exercise. Unlike other solutions that only work when you're physically exerting yourself, Java Burn gets your metabolism going as soon as you take your first cup of coffee and keeps it going for hours. <strong>Long-Term Fat Burning</strong> According to <strong><a href="https://www.globenewswire.com/news-release/2025/05/02/3073470/0/en/Java-Burn-Reviews-Complaints-Side-Effects-2025-Update-Verified-Users-Reveal-Does-Java-Burn-Coffee-Work.html">Java Burn Reviews 2025</a></strong>, the formula supports the real breakdown and utilization of stored fat in addition to metabolism.
While EGCG and chlorogenic acid encourage thermogenesis, the body's process of burning fat and producing heat, L-carnitine aids in moving fatty acids into the mitochondria for energy. It is easier to reach and use stored fat because of this dual-action mechanism, which supports both active and resting fat oxidation. This is especially true around troublesome areas like the hips and abdomen. <strong>Enhanced Vitality and Concentration</strong> Java Burn's capacity to increase your energy and concentration without the need for artificial stimulants is another important advantage. Because L-theanine promotes mental clarity and lessens the crash or jitteriness typically associated with coffee, it guarantees a gentler caffeine experience. Because it contains extra B6 and B12 to boost nervous system and cognitive function, customers report feeling more alert, focused, and invigorated all day long. This makes it a useful tool for daily performance and productivity in addition to weight loss. <strong>Control of Appetite and Cravings</strong> Through blood sugar management, <strong><a href="https://www.globenewswire.com/news-release/2025/05/02/3073470/0/en/Java-Burn-Reviews-Complaints-Side-Effects-2025-Update-Verified-Users-Reveal-Does-Java-Burn-Coffee-Work.html">Java Burn Reviews Consumer Reports</a></strong> provides indirect appetite assistance for people who struggle with overeating, snacking, or emotional eating. Maintaining healthy eating habits is made easier by chromium's ability to lessen sugar cravings and energy slumps. Because of this, Java Burn is particularly helpful during calorie-restricted eating programs or intermittent fasting, when low energy and hunger can frequently cause setbacks. <h2><strong>How to Use Java Burn: Guidelines for Optimal Outcomes</strong></h2> Java Burn's simplicity is one of its most notable qualities, and it contributes to its popularity. No lifestyle change is necessary, there are no difficult instructions, and there is no need for several doses throughout the day. Rather, the product is made to work with coffee, which most people already drink every morning. Java Burn is as simple to use as it gets. Thirty separate sachets of tasteless, quickly dissolving powder are included in each pouch. After opening a sachet, add it to your usual cup of coffee, stir it in, and savor it. That's it: no further procedures, no grit, and no alteration in texture or flavor. Even in black coffee, the powder dissolves completely in a matter of seconds. <h2><strong>The Reason Java Burn Is Exclusive to Its Official Website</strong></h2> Java Burn is not available on third-party marketplaces, in contrast to other health goods that can be obtained from Amazon, Walmart, or health food stores. The producer made this deliberate choice to stay away from: <strong>Unknown substances in counterfeit supplements</strong> <strong>Unauthorized vendors' markups</strong> <strong>Absence of legitimate tracking or refund assistance</strong> The brand's official website is the only surefire way to get authentic Java Burn. Additionally, buyers are immediately covered by the 60-day money-back guarantee, which is only available for purchases made straight from the supplier.
<h3><a href="https://www.globenewswire.com/news-release/2025/05/02/3073470/0/en/Java-Burn-Reviews-Complaints-Side-Effects-2025-Update-Verified-Users-Reveal-Does-Java-Burn-Coffee-Work.html"><strong>Click Here To GET ORIGINAL Java Burn from OFFICIAL WEBSITE - SAVE 75% TODAY!</strong></a></h3> <h2><strong>Conclusion: 2025's Greatest Coffee-Based Supplement?</strong></h2> Few of the innumerable weight reduction products that hit the market each year are able to combine long-term usability, convenience, and performance supported by science. <strong><a href="https://www.globenewswire.com/news-release/2025/05/02/3073470/0/en/Java-Burn-Reviews-Complaints-Side-Effects-2025-Update-Verified-Users-Reveal-Does-Java-Burn-Coffee-Work.html">Java Burn Fat Burn</a></strong> does all three and more. For many people, the road to a faster metabolism, steady fat burning, and feeling more invigorated every day has been difficult and unpleasant. Java Burn, a flavorless, fast-acting coffee ingredient, makes it easier. The friction that frequently results in supplement non-compliance can be avoided by improving something you currently consume, such as your daily cup of coffee. READ MORE <strong><a href="https://www.facebook.com/TryJavaBurnCoffee2025/">https://www.facebook.com/TryJavaBurnCoffee2025/</a> </strong> <strong><a href="https://www.facebook.com/groups/javaburnweightlosscoffeereview/">https://www.facebook.com/groups/javaburnweightlosscoffeereview/</a> https://www.globenewswire.com/news-release/2025/04/28/3069575/0/en/Sugar-Defender-Reviews-Blood-Sugar-Control-Supplement-2025-User-Complaints-Ingredients-and-Possible-Side-Effects.html https://www.globenewswire.com/news-release/2025/04/26/3068686/0/en/Mitolyn-Reviews-2025-WARNING-Purple-Peel-Exploit-Scientific-Evidence-Complaints-What-the-Users-Says-About-Mitochondrial-Support.html https://java-burn-weight-loss-coffee.jimdosite.com/ https://javaburnweightlosscoffee1.godaddysites.com/ https://forums.siliconera.com/threads/in-depth-java-burn-reviews-unveiling-its-ingredients-side-effects-pros.100421/ https://forum.motoshkola.od.ua/threads/java-burn-is-good-my-honest-review-warning-dont-buy-before-watching-this.12268/ https://java-burn-weight-loss-coffee.mywebselfsite.net/ https://www.skillboxes.com/events/java-burn-reviews-2025-my-honest-review https://nas.io/java-burn-weight-loss-coffee/challenges/java-burn-review-2025-the-brutal-truth https://nas.io/java-burn-weight-loss-coffee/challenges/java-burn-weight-loss-coffee-the-ultimate-metabolism-booster https://sfero.me/article/-depth-java-burn-reviews-unveiling https://www.narumugainovels.com/threads/25482/ https://www.imdb.com/list/ls592547347/ https://www.pixiv.net/novel/show.php?id=24680198 https://feedback.kopernio.com/topic/12506-java-burn-reviews-it-safe-and-effective-for-weight-loss https://sites.google.com/view/java-burn-fat-burn-review/ https://www.deviantart.com/javaburnbuy/art/1190311754 https://www.deviantart.com/javaburnbuy https://fueler.io/javaburncoffeeorder
pafr25/ppo-Huggy
pafr25
2025-05-03T10:33:41Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2025-05-03T10:33:35Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: pafr25/ppo-Huggy 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
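If you prefer fetching the trained policy files programmatically instead, a minimal, untested sketch with the Hugging Face Hub client (the repo id comes from this card; the local directory is an arbitrary example):

```python
# Download the trained Huggy artifacts (.onnx checkpoint, TensorBoard logs)
# to a local folder for inspection or for use with the ML-Agents toolkit.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="pafr25/ppo-Huggy",
    local_dir="./ppo-Huggy",  # example path, choose any
)
print(local_dir)
```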
mradermacher/Aligner-Med-i1-GGUF
mradermacher
2025-05-03T10:32:19Z
0
0
transformers
[ "transformers", "gguf", "clinic", "medical", "aligner", "gemma", "en", "base_model:clinic-research/Aligner-Med", "base_model:quantized:clinic-research/Aligner-Med", "endpoints_compatible", "region:us", "imatrix" ]
null
2025-05-03T08:48:47Z
--- base_model: clinic-research/Aligner-Med language: - en library_name: transformers quantized_by: mradermacher tags: - clinic - medical - aligner - gemma --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/clinic-research/Aligner-Med <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Aligner-Med-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-IQ1_M.gguf) | i1-IQ1_M | 0.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-IQ2_S.gguf) | i1-IQ2_S | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-IQ2_M.gguf) | i1-IQ2_M | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.2 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-Q2_K.gguf) | i1-Q2_K | 1.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.4 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-IQ3_S.gguf) | i1-IQ3_S | 1.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-IQ3_M.gguf) | i1-IQ3_M | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.5 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.6 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.6 | | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.7 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-Q4_0.gguf) | i1-Q4_0 | 1.7 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.7 | optimal 
size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-Q4_1.gguf) | i1-Q4_1 | 1.8 | | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.9 | | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.9 | | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-Q6_K.gguf) | i1-Q6_K | 2.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
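For a quick local test of one of these quants, a minimal, untested sketch using llama-cpp-python (one of several GGUF-capable runtimes; the filename comes from the table above, while the prompt and token budget are placeholders):

```python
# Fetch the recommended i1-Q4_K_M quant from this repo and run a short prompt.
# Assumes `huggingface_hub` and `llama-cpp-python` are installed.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Aligner-Med-i1-GGUF",
    filename="Aligner-Med.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path)
out = llm("Hello, how are you?", max_tokens=64)
print(out["choices"][0]["text"])
```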
mradermacher/Aligner-Med-GGUF
mradermacher
2025-05-03T10:32:18Z
0
0
transformers
[ "transformers", "gguf", "clinic", "medical", "aligner", "gemma", "en", "base_model:clinic-research/Aligner-Med", "base_model:quantized:clinic-research/Aligner-Med", "endpoints_compatible", "region:us" ]
null
2025-05-02T20:41:50Z
--- base_model: clinic-research/Aligner-Med language: - en library_name: transformers quantized_by: mradermacher tags: - clinic - medical - aligner - gemma --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/clinic-research/Aligner-Med <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-GGUF/resolve/main/Aligner-Med.Q2_K.gguf) | Q2_K | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-GGUF/resolve/main/Aligner-Med.Q3_K_S.gguf) | Q3_K_S | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-GGUF/resolve/main/Aligner-Med.Q3_K_M.gguf) | Q3_K_M | 1.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-GGUF/resolve/main/Aligner-Med.Q3_K_L.gguf) | Q3_K_L | 1.6 | | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-GGUF/resolve/main/Aligner-Med.IQ4_XS.gguf) | IQ4_XS | 1.6 | | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-GGUF/resolve/main/Aligner-Med.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-GGUF/resolve/main/Aligner-Med.Q4_K_M.gguf) | Q4_K_M | 1.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-GGUF/resolve/main/Aligner-Med.Q5_K_S.gguf) | Q5_K_S | 1.9 | | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-GGUF/resolve/main/Aligner-Med.Q5_K_M.gguf) | Q5_K_M | 1.9 | | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-GGUF/resolve/main/Aligner-Med.Q6_K.gguf) | Q6_K | 2.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-GGUF/resolve/main/Aligner-Med.Q8_0.gguf) | Q8_0 | 2.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Aligner-Med-GGUF/resolve/main/Aligner-Med.f16.gguf) | f16 | 5.1 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
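On the concatenation of multi-part files mentioned in the usage note: the quants listed above appear to ship as single files, but for repos where a quant is split into parts, a hypothetical sketch of joining them looks like this (the part names are illustrative placeholders, not files from this repo):

```python
# Join split GGUF parts back into a single file before loading.
import shutil

parts = ["model.gguf.part1of2", "model.gguf.part2of2"]  # hypothetical names
with open("model.gguf", "wb") as out:
    for name in parts:
        with open(name, "rb") as src:
            shutil.copyfileobj(src, out)  # stream each part into the target
```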
joboffer/bc010d97-3d0d-4f18-9ca9-acfc7c7715b1
joboffer
2025-05-03T10:27:11Z
0
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:lcw99/zephykor-ko-7b-chang", "base_model:adapter:lcw99/zephykor-ko-7b-chang", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-03T10:18:05Z
--- library_name: peft base_model: lcw99/zephykor-ko-7b-chang tags: - axolotl - generated_from_trainer model-index: - name: bc010d97-3d0d-4f18-9ca9-acfc7c7715b1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: lcw99/zephykor-ko-7b-chang bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 1451ab6e54f45199_train_data.json ds_type: json format: custom path: /workspace/input_data/1451ab6e54f45199_train_data.json type: field_input: seed_transcript field_instruction: input field_output: target format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: joboffer/bc010d97-3d0d-4f18-9ca9-acfc7c7715b1 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/1451ab6e54f45199_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: d20e189e-0b9d-4ae5-b5e7-040751db6a91 wandb_project: s56-33 wandb_run: your_name wandb_runid: d20e189e-0b9d-4ae5-b5e7-040751db6a91 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # bc010d97-3d0d-4f18-9ca9-acfc7c7715b1 This model is a fine-tuned version of [lcw99/zephykor-ko-7b-chang](https://huggingface.co/lcw99/zephykor-ko-7b-chang) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9748 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.0832 | 0.0137 | 200 | 0.9748 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
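For inference, a minimal, untested sketch that mirrors the 4-bit setup from the axolotl config above (`load_in_4bit: true`, `trust_remote_code: true`, bf16); the repo and base-model ids come from this card:

```python
# Load the base model in 4-bit (as during training) and attach the adapter.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # matches bf16: true in the config
)
base = AutoModelForCausalLM.from_pretrained(
    "lcw99/zephykor-ko-7b-chang",
    quantization_config=bnb_config,
    trust_remote_code=True,  # mirrors trust_remote_code: true above
)
tokenizer = AutoTokenizer.from_pretrained("lcw99/zephykor-ko-7b-chang", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "joboffer/bc010d97-3d0d-4f18-9ca9-acfc7c7715b1")
```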
harman/gemma2-9b_ultrafeedback-CARMA_qrandomized_neutrals_our_improve_degrade_data_pairpm
harman
2025-05-03T10:25:30Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "feature-extraction", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2025-05-03T09:21:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
deeponh/bengali_8b_8b_L2
deeponh
2025-05-03T10:21:36Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-02T05:20:04Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]