modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
list
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
theprint/DevilsAdvocate-1B-GGUF
theprint
2025-09-18T12:13:27Z
0
0
gguf
[ "gguf", "quantized", "llama.cpp", "devilsadvocate-1b", "text-generation", "en", "base_model:google/gemma-3-1b-it", "base_model:quantized:google/gemma-3-1b-it", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-09-18T12:12:03Z
--- base_model: google/gemma-3-1b-it library_name: gguf pipeline_tag: text-generation language: en license: mit tags: - gguf - quantized - llama.cpp - devilsadvocate-1b model_type: llama quantized_by: theprint --- # DevilsAdvocate-1B - GGUF Quantized Quantized GGUF versions of [DevilsAdvocate-1B](https://huggingface.co/theprint/DevilsAdvocate-1B) for use with llama.cpp and other GGUF-compatible inference engines. ## Original Model - **Base model:** [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it) - **Fine-tuned model:** [theprint/DevilsAdvocate-1B](https://huggingface.co/theprint/DevilsAdvocate-1B) - **Quantized by:** theprint ## Available Quantizations - `DevilsAdvocate-1B-f16.gguf` (2489.6 MB) - 16-bit float (original precision, largest file) - `DevilsAdvocate-1B-q3_k_m.gguf` (850.9 MB) - 3-bit quantization (medium quality) - `DevilsAdvocate-1B-q4_k_m.gguf` (966.7 MB) - 4-bit quantization (medium, recommended for most use cases) - `DevilsAdvocate-1B-q5_k_m.gguf` (1027.9 MB) - 5-bit quantization (medium, good quality) - `DevilsAdvocate-1B-q6_k.gguf` (1270.9 MB) - 6-bit quantization (high quality) - `DevilsAdvocate-1B-q8_0.gguf` (1325.8 MB) - 8-bit quantization (very high quality) ## Usage ### With llama.cpp ```bash # Download recommended quantization wget https://huggingface.co/theprint/DevilsAdvocate-1B-GGUF/resolve/main/DevilsAdvocate-1B-q4_k_m.gguf # Run inference ./llama.cpp/main -m DevilsAdvocate-1B-q4_k_m.gguf \ -p "Your prompt here" \ -n 256 \ --temp 0.7 \ --top-p 0.9 ``` ### With other GGUF tools These files are compatible with: - [llama.cpp](https://github.com/ggerganov/llama.cpp) - [Ollama](https://ollama.ai/) (import as custom model) - [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) ## Quantization Info **Recommended:** `q4_k_m` provides the best balance of size, speed, and quality for most use cases. **For maximum quality:** Use `q8_0` or `f16` **For maximum speed/smallest size:** Use `q3_k_m` or `q4_k_s` ## License mit ## Citation ```bibtex @misc{devilsadvocate_1b_gguf, title={DevilsAdvocate-1B GGUF Quantized Models}, author={theprint}, year={2025}, publisher={Hugging Face}, url={https://huggingface.co/theprint/DevilsAdvocate-1B-GGUF} } ```
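The card lists several GGUF-compatible tools but only shows the llama.cpp CLI. As a complement, here is a minimal sketch using the llama-cpp-python bindings; the file path assumes the `wget` download from the usage section above, and the sampling values simply mirror the CLI flags rather than any tuning by the author.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load the locally downloaded q4_k_m quantization (path from the wget step above)
llm = Llama(model_path="DevilsAdvocate-1B-q4_k_m.gguf", n_ctx=4096)

# Plain completion call mirroring the CLI flags (-n 256, --temp 0.7, --top-p 0.9)
result = llm("Your prompt here", max_tokens=256, temperature=0.7, top_p=0.9)
print(result["choices"][0]["text"])
```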
alesiaivanova/Llama-3B-GRPO-new-1-sub-main-2-sub-1024-3-sub-1280-lr-2e-6-small-int-only
alesiaivanova
2025-09-18T12:12:45Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "grpo", "arxiv:2402.03300", "endpoints_compatible", "region:us" ]
null
2025-09-18T12:10:14Z
--- library_name: transformers model_name: Llama-3B-GRPO-new-1-sub-main-2-sub-1024-3-sub-1280-lr-2e-6-small-int-only tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3B-GRPO-new-1-sub-main-2-sub-1024-3-sub-1280-lr-2e-6-small-int-only This model is a fine-tuned version of an unspecified base model. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="alesiaivanova/Llama-3B-GRPO-new-1-sub-main-2-sub-1024-3-sub-1280-lr-2e-6-small-int-only", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alesyaivanova/long-horizon-reasoning/runs/jq4l0ryy) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.3 - Pytorch: 2.7.1 - Datasets: 3.6.0 - Tokenizers: 0.21.4 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
theprint/DevilsAdvocate-1B
theprint
2025-09-18T12:12:02Z
0
0
peft
[ "peft", "pytorch", "gemma3_text", "text-generation", "lora", "sft", "transformers", "trl", "unsloth", "fine-tuned", "conversational", "en", "dataset:theprint/Advocate-9.4k", "base_model:google/gemma-3-1b-it", "base_model:adapter:google/gemma-3-1b-it", "license:mit", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-18T12:10:47Z
--- base_model: google/gemma-3-1b-it library_name: peft pipeline_tag: text-generation language: en license: mit tags: - lora - sft - transformers - trl - unsloth - fine-tuned datasets: - theprint/Advocate-9.4k --- # DevilsAdvocate-1B A Gemma 3 1B model fine-tuned for more engaging conversation, encouraging the user to think about different aspects of a topic. ## Model Details This model is a fine-tuned version of google/gemma-3-1b-it using the Unsloth framework with LoRA (Low-Rank Adaptation) for efficient training. - **Developed by:** theprint - **Model type:** Causal Language Model (Fine-tuned with LoRA) - **Language:** en - **License:** mit - **Base model:** google/gemma-3-1b-it - **Fine-tuning method:** LoRA with rank 128 ## Intended Use General conversation, project feedback and brainstorming. ## GGUF Quantized Versions Quantized GGUF versions are available in the [theprint/DevilsAdvocate-1B-GGUF](https://huggingface.co/theprint/DevilsAdvocate-1B-GGUF) repo. - `DevilsAdvocate-1B-f16.gguf` (2489.6 MB) - 16-bit float (original precision, largest file) - `DevilsAdvocate-1B-q3_k_m.gguf` (850.9 MB) - 3-bit quantization (medium quality) - `DevilsAdvocate-1B-q4_k_m.gguf` (966.7 MB) - 4-bit quantization (medium, recommended for most use cases) - `DevilsAdvocate-1B-q5_k_m.gguf` (1027.9 MB) - 5-bit quantization (medium, good quality) - `DevilsAdvocate-1B-q6_k.gguf` (1270.9 MB) - 6-bit quantization (high quality) - `DevilsAdvocate-1B-q8_0.gguf` (1325.8 MB) - 8-bit quantization (very high quality) ## Training Details ### Training Data The dataset used is [theprint/Advocate-9.4k](https://huggingface.co/datasets/theprint/Advocate-9.4k). - **Dataset:** theprint/Advocate-9.4k - **Format:** alpaca ### Training Procedure - **Training epochs:** 2 - **LoRA rank:** 128 - **Learning rate:** 3e-05 - **Batch size:** 4 - **Framework:** Unsloth + transformers + PEFT - **Hardware:** NVIDIA RTX 5090 ## Usage ```python from unsloth import FastLanguageModel import torch # Load model and tokenizer model, tokenizer = FastLanguageModel.from_pretrained( model_name="theprint/DevilsAdvocate-1B", max_seq_length=4096, dtype=None, load_in_4bit=True, ) # Enable inference mode FastLanguageModel.for_inference(model) # Example usage inputs = tokenizer(["Your prompt here"], return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.7) response = tokenizer.decode(outputs[0], skip_special_tokens=True) print(response) ``` ### Alternative Usage (Standard Transformers) ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch model = AutoModelForCausalLM.from_pretrained( "theprint/DevilsAdvocate-1B", torch_dtype=torch.float16, device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained("theprint/DevilsAdvocate-1B") # Example usage messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Your question here"} ] inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True) outputs = model.generate(inputs, max_new_tokens=256, temperature=0.7, do_sample=True) response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True) print(response) ``` ### Using with llama.cpp ```bash # Download a quantized version (q4_k_m recommended for most use cases) wget https://huggingface.co/theprint/DevilsAdvocate-1B/resolve/main/gguf/DevilsAdvocate-1B-q4_k_m.gguf # Run with llama.cpp ./llama.cpp/main -m DevilsAdvocate-1B-q4_k_m.gguf -p "Your prompt here" -n 256 ``` ## Limitations May provide
incorrect information. ## Citation If you use this model, please cite: ```bibtex @misc{devilsadvocate_1b, title={DevilsAdvocate-1B: Fine-tuned google/gemma-3-1b-it}, author={theprint}, year={2025}, publisher={Hugging Face}, url={https://huggingface.co/theprint/DevilsAdvocate-1B} } ``` ## Acknowledgments - Base model: [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it) - Training dataset: [theprint/Advocate-9.4k](https://huggingface.co/datasets/theprint/Advocate-9.4k) - Fine-tuning framework: [Unsloth](https://github.com/unslothai/unsloth) - Quantization: [llama.cpp](https://github.com/ggerganov/llama.cpp)
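Since the repo is tagged `peft` with `base_model:adapter:google/gemma-3-1b-it`, the weights can presumably also be attached as a LoRA adapter with plain PEFT, without Unsloth. A minimal sketch, assuming the adapter files sit at the repo root:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach this repo as a LoRA adapter
base = AutoModelForCausalLM.from_pretrained("google/gemma-3-1b-it", device_map="auto")
model = PeftModel.from_pretrained(base, "theprint/DevilsAdvocate-1B")
tokenizer = AutoTokenizer.from_pretrained("theprint/DevilsAdvocate-1B")

# Optionally fold the adapter into the base weights for faster inference
model = model.merge_and_unload()
```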
handsukwoo/qwen2_5vl7b_skin_labels_only_r32_b8
handsukwoo
2025-09-18T12:11:38Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2_5_vl", "trl", "en", "base_model:unsloth/Qwen2.5-VL-7B-Instruct-unsloth-bnb-4bit", "base_model:finetune:unsloth/Qwen2.5-VL-7B-Instruct-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-18T12:11:27Z
--- base_model: unsloth/Qwen2.5-VL-7B-Instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2_5_vl - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** handsukwoo - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen2.5-VL-7B-Instruct-unsloth-bnb-4bit This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
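The card gives no inference snippet. A hedged sketch using Unsloth's vision loader (`FastVisionModel`); whether this repo holds merged weights or only a LoRA adapter is not stated, so treat the load call as an assumption:

```python
from unsloth import FastVisionModel

# Assumes the repo loads directly; it may instead need the base model plus an adapter
model, tokenizer = FastVisionModel.from_pretrained(
    "handsukwoo/qwen2_5vl7b_skin_labels_only_r32_b8",
    load_in_4bit=True,
)
FastVisionModel.for_inference(model)  # switch to inference mode
```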
2U35/MIRG-7B
2U35
2025-09-18T12:11:30Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-18T12:11:30Z
--- license: apache-2.0 ---
harisnaeem/whisper-base.en-ONNX
harisnaeem
2025-09-18T12:11:08Z
0
0
transformers.js
[ "transformers.js", "onnx", "whisper", "automatic-speech-recognition", "base_model:openai/whisper-base.en", "base_model:quantized:openai/whisper-base.en", "region:us" ]
automatic-speech-recognition
2025-09-18T12:10:55Z
--- library_name: transformers.js base_model: - openai/whisper-base.en --- # whisper-base.en (ONNX) This is an ONNX version of [openai/whisper-base.en](https://huggingface.co/openai/whisper-base.en). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
alesiaivanova/Llama-3B-GRPO-new-1-sub-main-2-sub-1024-3-sub-1280-lr-2e-6-int-only
alesiaivanova
2025-09-18T12:10:13Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "grpo", "trl", "arxiv:2402.03300", "endpoints_compatible", "region:us" ]
null
2025-09-18T12:07:55Z
--- library_name: transformers model_name: Llama-3B-GRPO-new-1-sub-main-2-sub-1024-3-sub-1280-lr-2e-6-int-only tags: - generated_from_trainer - grpo - trl licence: license --- # Model Card for Llama-3B-GRPO-new-1-sub-main-2-sub-1024-3-sub-1280-lr-2e-6-int-only This model is a fine-tuned version of an unspecified base model. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="alesiaivanova/Llama-3B-GRPO-new-1-sub-main-2-sub-1024-3-sub-1280-lr-2e-6-int-only", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alesyaivanova/long-horizon-reasoning/runs/b7bo1bh7) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.3 - Pytorch: 2.7.1 - Datasets: 3.6.0 - Tokenizers: 0.21.4 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
codefuse-ai/CF-Embed-4B
codefuse-ai
2025-09-18T12:10:07Z
0
0
null
[ "safetensors", "qwen3", "license:apache-2.0", "region:us" ]
null
2025-09-18T12:05:13Z
--- license: apache-2.0 ---
levshechter/tibetan-CS-detector_mbert-tibetan-continual-wylie_all_data_no_labels
levshechter
2025-09-18T12:09:59Z
0
0
transformers
[ "transformers", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:OMRIDRORI/mbert-tibetan-continual-wylie-final", "base_model:finetune:OMRIDRORI/mbert-tibetan-continual-wylie-final", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-09-18T12:01:16Z
--- library_name: transformers base_model: OMRIDRORI/mbert-tibetan-continual-wylie-final tags: - generated_from_trainer metrics: - accuracy model-index: - name: tibetan-CS-detector_mbert-tibetan-continual-wylie_all_data_no_labels results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tibetan-CS-detector_mbert-tibetan-continual-wylie_all_data_no_labels This model is a fine-tuned version of [OMRIDRORI/mbert-tibetan-continual-wylie-final](https://huggingface.co/OMRIDRORI/mbert-tibetan-continual-wylie-final) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 49.1782 - Accuracy: 0.9290 - Switch Precision: 0.4851 - Switch Recall: 0.8909 - Switch F1: 0.6282 - True Switches: 165 - Pred Switches: 303 - Exact Matches: 131 - Proximity Matches: 16 - To Auto Precision: 0.6050 - To Auto Recall: 0.9 - To Allo Precision: 0.4076 - To Allo Recall: 0.8824 - True To Auto: 80 - True To Allo: 85 - Matched To Auto: 72 - Matched To Allo: 75 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 35 - mixed_precision_training: Native AMP - label_smoothing_factor: 0.05 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Switch Precision | Switch Recall | Switch F1 | True Switches | Pred Switches | Exact Matches | Proximity Matches | To Auto Precision | To Auto Recall | To Allo Precision | To Allo Recall | True To Auto | True To Allo | Matched To Auto | Matched To Allo | |:-------------:|:-------:|:----:|:---------------:|:--------:|:----------------:|:-------------:|:---------:|:-------------:|:-------------:|:-------------:|:-----------------:|:-----------------:|:--------------:|:-----------------:|:--------------:|:------------:|:------------:|:---------------:|:---------------:| | 13.5452 | 1.5789 | 30 | 3.9859 | 0.6703 | 0.1875 | 0.0182 | 0.0331 | 165 | 16 | 0 | 3 | 0.0769 | 0.0125 | 0.6667 | 0.0235 | 80 | 85 | 1 | 2 | | 4.1536 | 3.1579 | 60 | 2.9644 | 0.7819 | 0.6667 | 0.0242 | 0.0468 | 165 | 6 | 4 | 0 | 0.8 | 0.05 | 0.0 | 0.0 | 80 | 85 | 4 | 0 | | 7.9073 | 4.7368 | 90 | 3.2495 | 0.7947 | 0.7361 | 0.3212 | 0.4473 | 165 | 72 | 51 | 2 | 0.7391 | 0.6375 | 0.6667 | 0.0235 | 80 | 85 | 51 | 2 | | 8.2183 | 6.3158 | 120 | 3.6442 | 0.7945 | 0.5175 | 0.4485 | 0.4805 | 165 | 143 | 68 | 6 | 0.6392 | 0.775 | 0.2609 | 0.1412 | 80 | 85 | 62 | 12 | | 4.9603 | 7.8947 | 150 | 3.6908 | 0.7961 | 0.4466 | 0.5576 | 0.4960 | 165 | 206 | 84 | 8 | 0.6562 | 0.7875 | 0.2636 | 0.3412 | 80 | 85 | 63 | 29 | | 8.0485 | 9.4737 | 180 | 2.1089 | 0.8634 | 0.6429 | 0.5455 | 0.5902 | 165 | 140 | 87 | 3 | 0.6667 | 0.85 | 0.5789 | 0.2588 | 80 | 85 | 68 | 22 | | 2.8204 | 11.0526 | 210 | 4.9959 | 0.8964 | 0.4345 | 0.8242 | 0.5690 | 165 | 313 | 115 | 21 | 0.5854 | 0.9 | 0.3368 | 0.7529 | 80 | 85 | 72 | 64 | | 5.5281 | 12.6316 | 240 | 4.1823 | 0.9059 | 0.4187 | 0.8424 | 0.5594 | 165 | 332 | 118 | 21 | 
0.5373 | 0.9 | 0.3384 | 0.7882 | 80 | 85 | 72 | 67 | | 5.8014 | 14.2105 | 270 | 4.7370 | 0.9123 | 0.4316 | 0.8606 | 0.5749 | 165 | 329 | 126 | 16 | 0.5414 | 0.9 | 0.3571 | 0.8235 | 80 | 85 | 72 | 70 | | 2.0906 | 15.7895 | 300 | 17.8083 | 0.9123 | 0.4540 | 0.8667 | 0.5958 | 165 | 315 | 127 | 16 | 0.5902 | 0.9 | 0.3679 | 0.8353 | 80 | 85 | 72 | 71 | | 1.9274 | 17.3684 | 330 | 22.6264 | 0.9191 | 0.4337 | 0.8727 | 0.5795 | 165 | 332 | 122 | 22 | 0.6 | 0.9 | 0.3396 | 0.8471 | 80 | 85 | 72 | 72 | | 1.6201 | 18.9474 | 360 | 50.9304 | 0.9172 | 0.4398 | 0.8848 | 0.5875 | 165 | 332 | 129 | 17 | 0.5806 | 0.9 | 0.3558 | 0.8706 | 80 | 85 | 72 | 74 | | 6.5566 | 20.5263 | 390 | 35.8194 | 0.9231 | 0.4660 | 0.8727 | 0.6076 | 165 | 309 | 130 | 14 | 0.5414 | 0.9 | 0.4091 | 0.8471 | 80 | 85 | 72 | 72 | | 5.3539 | 22.1053 | 420 | 49.6696 | 0.9239 | 0.4492 | 0.8848 | 0.5959 | 165 | 325 | 131 | 15 | 0.5854 | 0.9 | 0.3663 | 0.8706 | 80 | 85 | 72 | 74 | | 4.5666 | 23.6842 | 450 | 51.3558 | 0.9242 | 0.4635 | 0.8848 | 0.6083 | 165 | 315 | 133 | 13 | 0.5669 | 0.9 | 0.3936 | 0.8706 | 80 | 85 | 72 | 74 | | 3.8105 | 25.2632 | 480 | 50.8006 | 0.9263 | 0.4273 | 0.8909 | 0.5776 | 165 | 344 | 129 | 18 | 0.5455 | 0.9 | 0.3538 | 0.8824 | 80 | 85 | 72 | 75 | | 1.371 | 26.8421 | 510 | 48.9676 | 0.9266 | 0.4647 | 0.8788 | 0.6080 | 165 | 312 | 131 | 14 | 0.6050 | 0.9 | 0.3782 | 0.8588 | 80 | 85 | 72 | 73 | | 9.9923 | 28.4211 | 540 | 73.3558 | 0.9265 | 0.4451 | 0.8848 | 0.5923 | 165 | 328 | 129 | 17 | 0.576 | 0.9 | 0.3645 | 0.8706 | 80 | 85 | 72 | 74 | | 3.5917 | 30.0 | 570 | 58.4935 | 0.9270 | 0.4531 | 0.8788 | 0.5979 | 165 | 320 | 126 | 19 | 0.6050 | 0.9 | 0.3632 | 0.8588 | 80 | 85 | 72 | 73 | | 7.5225 | 31.5789 | 600 | 72.4031 | 0.9273 | 0.4662 | 0.8788 | 0.6092 | 165 | 311 | 129 | 16 | 0.6 | 0.9 | 0.3822 | 0.8588 | 80 | 85 | 72 | 73 | | 2.9639 | 33.1579 | 630 | 34.2685 | 0.9293 | 0.4851 | 0.8909 | 0.6282 | 165 | 303 | 131 | 16 | 0.6102 | 0.9 | 0.4054 | 0.8824 | 80 | 85 | 72 | 75 | | 5.3705 | 34.7368 | 660 | 49.1782 | 0.9290 | 0.4851 | 0.8909 | 0.6282 | 165 | 303 | 131 | 16 | 0.6050 | 0.9 | 0.4076 | 0.8824 | 80 | 85 | 72 | 75 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.4.1+cu121 - Datasets 2.0.0 - Tokenizers 0.20.3
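The card shows no inference example. Given the `token-classification` pipeline tag, a minimal sketch (the Wylie input below is an arbitrary placeholder phrase, not drawn from the training data):

```python
from transformers import pipeline

# Token-level detector for register switches in Wylie-transliterated Tibetan
detector = pipeline(
    "token-classification",
    model="levshechter/tibetan-CS-detector_mbert-tibetan-continual-wylie_all_data_no_labels",
)

# Placeholder Wylie text; substitute real transliterated input
for tag in detector("bkra shis bde legs"):
    print(tag["word"], tag["entity"], round(tag["score"], 3))
```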
luckeciano/Qwen-2.5-7B-DrGRPO-Base-Adam-2Iterations-v3_5248
luckeciano
2025-09-18T12:09:43Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:DigitalLearningGmbH/MATH-lighteval", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:finetune:Qwen/Qwen2.5-Math-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-18T08:49:00Z
--- base_model: Qwen/Qwen2.5-Math-7B datasets: DigitalLearningGmbH/MATH-lighteval library_name: transformers model_name: Qwen-2.5-7B-DrGRPO-Base-Adam-2Iterations-v3_5248 tags: - generated_from_trainer - open-r1 - trl - grpo licence: license --- # Model Card for Qwen-2.5-7B-DrGRPO-Base-Adam-2Iterations-v3_5248 This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-DrGRPO-Base-Adam-2Iterations-v3_5248", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/z8bjiw8k) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.5.1 - Datasets: 3.4.1 - Tokenizers: 0.21.2 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
afiyarah/ge-gemma-make
afiyarah
2025-09-18T12:07:56Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "gemma3_text", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:2100", "loss:CosineSimilarityLoss", "arxiv:1908.10084", "base_model:google/embeddinggemma-300m", "base_model:finetune:google/embeddinggemma-300m", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-18T12:07:13Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - dense - generated_from_trainer - dataset_size:2100 - loss:CosineSimilarityLoss base_model: google/embeddinggemma-300m widget: - source_sentence: 'insurance field: discount breakdown | language: english | entity: Multi Vehicle Discount' sentences: - 'insurance field: discount breakdown | language: english | entity: Promo Discount' - 'insurance field: car color | language: english | entity: Dark Pink' - 'insurance field: gender | language: english | entity: Male' - source_sentence: 'insurance field: car color | language: english | entity: Light Silver' sentences: - 'insurance field: car color | language: arabic | entity: ذهبي فاتح' - 'insurance field: car color | language: english | entity: Gold' - 'insurance field: car color | language: arabic | entity: بني فاتح' - source_sentence: 'insurance field: discount breakdown | language: english | entity: Multi Vehicle Discount' sentences: - 'insurance field: discount breakdown | language: english | entity: Multi Vehicle Discount' - 'insurance field: car color | language: arabic | entity: رمادي غامق' - 'insurance field: discount breakdown | language: english | entity: Cross Sell Discount' - source_sentence: 'insurance field: car color | language: arabic | entity: وردي غامق' sentences: - 'insurance field: car color | language: english | entity: Purple' - 'insurance field: gender | language: english | entity: Female' - 'insurance field: car color | language: arabic | entity: بنفسجي غامق' - source_sentence: 'insurance field: car color | language: arabic | entity: فيروزي' sentences: - 'insurance field: car color | language: arabic | entity: بنفسجي' - 'insurance field: car color | language: arabic | entity: ذهبي غامق' - 'insurance field: car color | language: english | entity: light violet' pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - pearson_cosine - spearman_cosine model-index: - name: SentenceTransformer based on google/embeddinggemma-300m results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: insurance val type: insurance-val metrics: - type: pearson_cosine value: 0.9159092641545508 name: Pearson Cosine - type: spearman_cosine value: 0.8667579250036413 name: Spearman Cosine --- # SentenceTransformer based on google/embeddinggemma-300m This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m) <!-- at revision c5cfa06e5e282a820e85d57f7fb053207494f41d --> - **Maximum Sequence Length:** 2048 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 2048, 'do_lower_case': False, 'architecture': 'Gemma3TextModel'}) (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Dense({'in_features': 768, 'out_features': 3072, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'}) (3): Dense({'in_features': 3072, 'out_features': 768, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'}) (4): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("afiyarah/ge-gemma-make") # Run inference queries = [ "insurance field: car color | language: arabic | entity: \u0641\u064a\u0631\u0648\u0632\u064a", ] documents = [ 'insurance field: car color | language: arabic | entity: بنفسجي', 'insurance field: car color | language: arabic | entity: ذهبي غامق', 'insurance field: car color | language: english | entity: light violet', ] query_embeddings = model.encode_query(queries) document_embeddings = model.encode_document(documents) print(query_embeddings.shape, document_embeddings.shape) # [1, 768] [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(query_embeddings, document_embeddings) print(similarities) # tensor([[0.1427, 0.0756, 0.2771]]) ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Dataset: `insurance-val` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.9159 | | **spearman_cosine** | **0.8668** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model?
You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 2,100 training samples * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence_0 | sentence_1 | label | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 14 tokens</li><li>mean: 16.27 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 16.26 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.25</li><li>max: 1.0</li></ul> | * Samples: | sentence_0 | sentence_1 | label | |:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-----------------| | <code>insurance field: car color \| language: english \| entity: Brown</code> | <code>insurance field: car color \| language: english \| entity: Light Purple</code> | <code>0.0</code> | | <code>insurance field: car color \| language: english \| entity: Dark Gray</code> | <code>insurance field: car color \| language: english \| entity: Beige</code> | <code>0.0</code> | | <code>insurance field: car color \| language: english \| entity: Brown</code> | <code>insurance field: car color \| language: english \| entity: Green</code> | <code>0.0</code> | * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `fp16`: True - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 3 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: 
None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `parallelism_config`: None - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `hub_revision`: None - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `liger_kernel_config`: None - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin - `router_mapping`: {} - `learning_rate_mapping`: {} </details> ### Training Logs | Epoch | Step | insurance-val_spearman_cosine | |:------:|:----:|:-----------------------------:| | 0.7576 | 50 | 0.8668 | ### Framework Versions - Python: 3.12.11 - Sentence Transformers: 5.1.0 - Transformers: 4.56.1 - PyTorch: 2.8.0+cu126 - Accelerate: 1.10.1 - Datasets: 4.0.0 - Tokenizers: 0.22.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
mkoehler21/llama31-8B-lora-acronym
mkoehler21
2025-09-18T12:07:41Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/Llama-3.1-8B-Instruct", "base_model:finetune:unsloth/Llama-3.1-8B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-18T12:07:28Z
--- base_model: unsloth/Llama-3.1-8B-Instruct tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** mkoehler21 - **License:** apache-2.0 - **Finetuned from model:** unsloth/Llama-3.1-8B-Instruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
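No usage code is given. A sketch along the lines of Unsloth's standard loading pattern; whether this repo contains merged weights or a LoRA adapter is not stated, and the prompt is a hypothetical example suggested only by the repo name:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="mkoehler21/llama31-8B-lora-acronym",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable inference mode

inputs = tokenizer(["What does the acronym 'LLM' stand for?"], return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```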
gumperto/Qwen2.5-14B-Instruct-emergent-finetune-backwards_samples-all-full-r32
gumperto
2025-09-18T12:07:22Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "sft", "trl", "unsloth", "conversational", "base_model:unsloth/Qwen2.5-14B-Instruct", "base_model:finetune:unsloth/Qwen2.5-14B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-18T11:27:42Z
--- base_model: unsloth/Qwen2.5-14B-Instruct library_name: transformers model_name: Qwen2.5-14B-Instruct-emergent-finetune-backwards_samples-all-full-r32 tags: - generated_from_trainer - sft - trl - unsloth licence: license --- # Model Card for Qwen2.5-14B-Instruct-emergent-finetune-backwards_samples-all-full-r32 This model is a fine-tuned version of [unsloth/Qwen2.5-14B-Instruct](https://huggingface.co/unsloth/Qwen2.5-14B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="gumperto/Qwen2.5-14B-Instruct-emergent-finetune-backwards_samples-all-full-r32", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/gumperto-waseda-university/clarifying-em/runs/t5ry435y) This model was trained with SFT. ### Framework versions - TRL: 0.24.0.dev0 - Transformers: 4.56.1 - Pytorch: 2.8.0 - Datasets: 4.1.0 - Tokenizers: 0.22.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Serveurperso/OuteTTS-Voices
Serveurperso
2025-09-18T12:06:33Z
0
1
null
[ "license:apache-2.0", "region:us" ]
null
2025-03-29T19:04:31Z
--- license: apache-2.0 ---
YuCeong-May/MLC-SLM
YuCeong-May
2025-09-18T12:06:01Z
0
0
null
[ "automatic-speech-recognition", "en", "fr", "it", "ja", "ko", "vi", "th", "pt", "ru", "es", "de", "dataset:Nexdata/INTERSPEECH_2025_MLC-SLM_Challenge_Dataset", "dataset:bsmu/MLC-SLM-Eval", "base_model:Qwen/Qwen2.5-7B", "base_model:finetune:Qwen/Qwen2.5-7B", "license:apache-2.0", "region:us" ]
automatic-speech-recognition
2025-09-18T09:41:24Z
--- license: apache-2.0 datasets: - Nexdata/INTERSPEECH_2025_MLC-SLM_Challenge_Dataset - bsmu/MLC-SLM-Eval language: - en - fr - it - ja - ko - vi - th - pt - ru - es - de metrics: - cer - wer base_model: - Qwen/Qwen2.5-7B - openai/whisper-large-v3 - utter-project/mHuBERT-147 pipeline_tag: automatic-speech-recognition --- The fine-tuned Whisper models and the Speech-LLM we propose; the table reports error rates on the Dev, Eval, and CV-Test sets. | **System** | **Dev** | **Eval** | **CV-Test** | |----------------------------|---------|----------|-------------| | Whisper (LoRA-fine-tuned) | 11.40 | 10.71 | **11.47** | | Whisper (Full-fine-tuned) | **10.99** | **10.07** | 13.11 | | **Proposed Speech-LLM** | 11.74 | 10.69 | 15.26 |
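The card lists `wer` and `cer` as its metrics, so the table values are error rates. For scoring your own transcripts in the same spirit, one common option is the `jiwer` package; this is a generic sketch, not the authors' evaluation script:

```python
import jiwer  # pip install jiwer

references = ["the quick brown fox jumps over the lazy dog"]
hypotheses = ["the quick brown fox jumped over a lazy dog"]

wer = jiwer.wer(references, hypotheses)  # word error rate
cer = jiwer.cer(references, hypotheses)  # character error rate
print(f"WER: {wer:.2%}  CER: {cer:.2%}")
```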
HARRY07979/sd-v1-3-turbo-lite-onnx
HARRY07979
2025-09-18T12:05:10Z
28
0
diffusers
[ "diffusers", "onnx", "safetensors", "text-to-image", "en", "base_model:runwayml/stable-diffusion-v1-5", "base_model:quantized:runwayml/stable-diffusion-v1-5", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-09-13T06:45:37Z
--- language: - en base_model: - runwayml/stable-diffusion-v1-5 pipeline_tag: text-to-image --- MODEL CARD: This model is based on runwayml/stable-diffusion-v1-5 (now deprecated, but still usable) and has been converted to ONNX. Size: ~2 GB. Optimized for low-end hardware and CPU usage. If you encounter any issues, please contact me at [email protected]. Thank you!
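The card describes an ONNX conversion aimed at CPU inference but includes no code. A minimal sketch with Optimum's ONNX Runtime pipeline, assuming the repo keeps the standard diffusers ONNX layout implied by its `StableDiffusionPipeline` tag; the prompt and step count are illustrative only:

```python
from optimum.onnxruntime import ORTStableDiffusionPipeline  # pip install optimum[onnxruntime]

# Runs on CPU by default, matching the card's low-end-hardware focus
pipe = ORTStableDiffusionPipeline.from_pretrained("HARRY07979/sd-v1-3-turbo-lite-onnx")
image = pipe("a cozy cabin in a snowy forest, digital art", num_inference_steps=8).images[0]
image.save("output.png")
```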
MikeRoz/Behemoth-ReduX-123B-v1-exl2
MikeRoz
2025-09-18T12:04:33Z
1
1
exllamav2
[ "exllamav2", "exl2", "text-generation", "base_model:TheDrummer/Behemoth-ReduX-123B-v1", "base_model:quantized:TheDrummer/Behemoth-ReduX-123B-v1", "region:us" ]
text-generation
2025-09-17T22:59:58Z
--- inference: false base_model: TheDrummer/Behemoth-ReduX-123B-v1 base_model_relation: quantized tags: - exl2 library_name: exllamav2 pipeline_tag: text-generation --- exllamav2 quantizations of TheDrummer's [Behemoth-ReduX-123B-v1](https://huggingface.co/TheDrummer/Behemoth-ReduX-123B-v1) [2.25bpw h6](https://huggingface.co/MikeRoz/Behemoth-ReduX-123B-v1-exl2/tree/2.25bpw_H6) (32.964 GiB) [3.75bpw h6](https://huggingface.co/MikeRoz/Behemoth-ReduX-123B-v1-exl2/tree/3.75bpw_H6) (54.234 GiB) [4.25bpw h6](https://huggingface.co/MikeRoz/Behemoth-ReduX-123B-v1-exl2/tree/4.25bpw_H6) (61.324 GiB) [5.00bpw h6](https://huggingface.co/MikeRoz/Behemoth-ReduX-123B-v1-exl2/tree/5.00bpw_H6) (71.959 GiB) [8.00bpw h8](https://huggingface.co/MikeRoz/Behemoth-ReduX-123B-v1-exl2/tree/8.00bpw_H8) (114.555 GiB) (Uploading) [measurement.json](https://huggingface.co/MikeRoz/Behemoth-ReduX-123B-v1-exl2/resolve/main/measurement.json?download=true)
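Each quantization above lives on its own branch of the repo (the `tree/<bpw>_H6` segment of the links). A short sketch for fetching a single branch with `huggingface_hub`:

```python
from huggingface_hub import snapshot_download

# Branch names match the links above; download only the 2.25bpw_H6 revision
local_path = snapshot_download(
    "MikeRoz/Behemoth-ReduX-123B-v1-exl2",
    revision="2.25bpw_H6",
)
print(local_path)
```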
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758196981
schooncestiaa
2025-09-18T12:04:25Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scruffy webbed dragonfly", "arxiv:2504.07091", "region:us" ]
null
2025-09-18T12:04:06Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - scruffy webbed dragonfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
harisnaeem/whisper-base-ONNX
harisnaeem
2025-09-18T12:04:23Z
0
0
transformers.js
[ "transformers.js", "onnx", "whisper", "automatic-speech-recognition", "base_model:openai/whisper-base", "base_model:quantized:openai/whisper-base", "region:us" ]
automatic-speech-recognition
2025-09-18T12:04:09Z
--- library_name: transformers.js base_model: - openai/whisper-base --- # whisper-base (ONNX) This is an ONNX version of [openai/whisper-base](https://huggingface.co/openai/whisper-base). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
eventhub/Qwen3-0.6B-Gensyn-Swarm-hunting_reptilian_armadillo
eventhub
2025-09-18T12:04:05Z
116
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am hunting_reptilian_armadillo", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-09T14:24:40Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am hunting_reptilian_armadillo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
helloansuman/Qwen3-0.6B-text-to-text
helloansuman
2025-09-18T11:59:25Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen3-0.6B", "base_model:finetune:Qwen/Qwen3-0.6B", "endpoints_compatible", "region:us" ]
null
2025-09-17T09:29:03Z
--- base_model: Qwen/Qwen3-0.6B library_name: transformers model_name: Qwen3-0.6B-text-to-text tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for Qwen3-0.6B-text-to-text This model is a fine-tuned version of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="helloansuman/Qwen3-0.6B-text-to-text", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.15.2 - Transformers: 4.53.3 - Pytorch: 2.7.1 - Datasets: 3.3.2 - Tokenizers: 0.21.2 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
aamijar/Llama-3.1-8B-Instruct-lora-r8-sst2-epochs0
aamijar
2025-09-18T11:58:27Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-18T11:58:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AlekseyCalvin/LYRICAL_MT_ru2en_19_VikhrMistral_r64_remerge
AlekseyCalvin
2025-09-18T11:57:43Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "ru", "dataset:Vikhrmodels/GrandMaster-PRO-MAX", "dataset:Vikhrmodels/Grounded-RAG-RU-v2", "arxiv:2405.13929", "base_model:mistralai/Mistral-Nemo-Instruct-2407", "base_model:finetune:mistralai/Mistral-Nemo-Instruct-2407", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-18T11:49:29Z
---
license: apache-2.0
datasets:
- Vikhrmodels/GrandMaster-PRO-MAX
- Vikhrmodels/Grounded-RAG-RU-v2
language:
- en
- ru
base_model:
- mistralai/Mistral-Nemo-Instruct-2407
library_name: transformers
---

[Readme.md in English](Readme_en.md)

## Vikhr-Nemo-12B-Instruct-R-21-09-24

### Description

**Vikhr-Nemo** is our flagship unimodal LLM, an improved version of [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) built by the **VikhrModels** team and adapted primarily for Russian and English. Training consisted of several stages, including **SFT** and **SMPO**, our own variation of DPO; see the section *"How this model was built"* for details.

The model is optimized for a wide range of use cases, including reasoning, summarization, code, roleplay, and multi-turn dialogue. Vikhr-Nemo supports multilingual generation and offers high-performance RAG capabilities. It achieves the best scores among its peers on our instruction-following and RAG benchmarks, so we believe that on some tasks (for example, RAG) it can be on par with OpenAI's gpt-4o-mini.

All of the training code is available in our [effective_llm_alignment](https://github.com/VikhrModels/effective_llm_alignment/) repository on GitHub, and the main datasets are available in our [HF profile](https://huggingface.co/Vikhrmodels).

### Features

1. High-quality generation in Russian and English, as well as some other languages, thanks to the [Grandmaster-PRO-MAX](https://huggingface.co/datasets/Vikhrmodels/GrandMaster-PRO-MAX) dataset and the base model
2. Support for system prompts to control the response style
3. Support for up to 128k tokens of context, inherited from the base model
4. Grounded RAG mode: the model has a dedicated documents role and a special operating mode for finding the identifiers of the documents relevant to the user's question and using them to answer, inspired by the analogous capability of the Command-R model

### Metrics and quality evaluation

The model was evaluated on our Russian-language open-source side-by-side benchmark [ru-arena-general](https://github.com/VikhrModels/ru_llm_arena) (50 topics with 10 questions each), with gpt-4-1106-preview as the judge, and on a RAG [benchmark](https://colab.research.google.com/drive/16730rWQ4-yGqWoooLs0Ece_16frmOniP?usp=sharing) based on the test split of [Grounded-RAG-v2](https://huggingface.co/datasets/Vikhrmodels/Grounded-RAG-RU-v2), with gpt-4o as the judge.

#### Results on Ru-Arena-General

The reference answers that the models are compared against come from gpt-3.5-turbo-0125, which is why its winrate is 50%. Only part of the leaderboard is shown here; see the benchmark repository for the full version.

180 arena samples leaked into the training set (thanks to Ilya for the information!)
| Model Name | Winrate | 95% CI | Average # Tokens |
|--------------------------------------------------|--------|--------------------|------------------|
| gpt-4-1106-preview | 90.9 | (-1.3, 1.0) | 541 |
| gpt-4o-mini | 83.9 | (-1.8, 1.1) | 448 |
| **vikhr-nemo-12b-instruct-r-21-09-24 (180 leaked)** | **79.8** | (-2.2, 1.9) | **627** |
| gemma-2-9b-it-sppo-iter3 | 73.6 | (-1.6, 2.2) | 509 |
| gemma-2-9b-it | 69.2 | (-2.5, 1.9) | 459 |
| t-lite-instruct-0.1 | 64.7 | (-2.1, 1.7) | 810 |
| vikhr-llama3.1-8b-instruct-r-21-09-24 | 63.4 | (-2.1, 2.5) | 618 |
| suzume-llama-3-8B-multilingual-orpo-borda-half | 57.1 | (-1.9, 2.2) | 682 |
| mistral-nemo-instruct-2407 | 50.5 | (-2.7, 2.6) | 403 |
| gpt-3.5-turbo-0125 | 50.0 | (0.0, 0.0) | 220 |
| c4ai-command-r-v01 | 49.0 | (-1.7, 2.2) | 529 |
| meta-llama-3.1-8b-instruct | 43.1 | (-2.8, 2.3) | 628 |

#### Results on the RAG benchmark

The total test set size is 200 examples: 100 in_domain questions and 100 out_of_domain. To assess quality, the judge model gpt-4o was instructed to take into account the relevance and factual completeness of the answers, given the documents and the reference answer from gpt-4-1106-preview. See the benchmark code in the [Colab notebook](https://colab.research.google.com/drive/16730rWQ4-yGqWoooLs0Ece_16frmOniP?usp=sharing) for the prompts and scoring details.

in_domain: questions related, to one degree or another, to the content of the provided documents \
out_of_domain: questions deliberately unrelated to the content of the provided documents

| Model | question_type | judge_correct_percent | avg_answer_match_rougeL | avg_abs_indexes_diff |
|---|---|---|---|---|
| gpt-4o | in_domain | 73% | 0.34 | NaN |
| gpt-4o | out_of_domain | 81% | 0.20 | NaN |
| Vikhr-Nemo-12B-Instruct-R-21-09-24 | in_domain | 68% | 0.41 | 0 |
| Vikhr-Nemo-12B-Instruct-R-21-09-24 | out_of_domain | 92% | 0.52 | 0 |
| gpt-4o-mini | in_domain | 65% | 0.33 | NaN |
| gpt-4o-mini | out_of_domain | 73% | 0.18 | NaN |
| gpt-3.5-turbo-0125 | in_domain | 49% | 0.28 | NaN |
| gpt-3.5-turbo-0125 | out_of_domain | 76% | 0.20 | NaN |

### How this model was built

#### Instruction SFT stage

For the SFT training stage we prepared a large (150k instructions) synthetic instruction
dataset, [Vikhrmodels/GrandMaster-PRO-MAX](https://huggingface.co/datasets/Vikhrmodels/GrandMaster-PRO-MAX). Its distinguishing feature is built-in CoT (Chain-of-Thought), which we collected using a modified prompt for gpt-4-turbo; see the dataset card for details.

In addition, to enable RAG grounding, we prepared another synthetic dataset, [Vikhrmodels/Grounded-RAG-RU-v2](https://huggingface.co/datasets/Vikhrmodels/Grounded-RAG-RU-v2) (50k dialogues). Its construction pipeline is too complex for a short description; you can read more about it in its dataset card.

#### Alignment stage with SMPO

To further improve answer quality we used the following pipeline:

1) Trained a custom reward model (it will not be released publicly for now)
2) Deduplicated and filtered the original Vikhrmodels/GrandMaster-PRO-MAX dataset using the RM, obtaining around 10k of the highest-quality and most diverse dialogues
3) Ran rejection sampling with the SFT checkpoint, using the resulting dataset and the reward model (we generated 7 hypotheses and took only the 2 worst as rejected)
4) Further trained the SFT checkpoint with our SMPO method, using the dataset obtained in step 3

SMPO was designed and chosen as the method for improving the stability of preference training under rejection sampling and for reaching the required margin.

The implementation of SMPO, rejection sampling, and so on can be found in our [effective_llm_alignment](https://github.com/VikhrModels/effective_llm_alignment/) library on GitHub.

The idea of using SMPO rather than another PO method came out of a large number of experiments with the classical methods and the need for better control over the convergence process. With careful tuning of other methods (for example SimPO) a similar result can be achieved, but we aimed to stabilize this process and combine the best practices from the other methods.

### How to work with RAG

The documents role is a list of dictionaries describing the documents' content, serialized with `json.dumps(array, ensure_ascii=False)` (see the example below). \
Document content may be provided in **3** different formats: **Markdown**, **HTML**, or **Plain Text**. The content of each document can be a text chunk of up to 4k characters.

```json
[
  {
    "doc_id": (0..5),
    "title": "(null or str)",
    "content": "(html or markdown or plain text)"
  }
]
```

#### Example of correct usage with an OpenAI-like API

Start the vLLM server: `vllm serve --dtype half --max-model-len 32000 -tp 1 Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24 --api-key token-abc123`

```python
import json

from openai import OpenAI

# Client for the vLLM OpenAI-compatible server started above
llm_client = OpenAI(base_url="http://localhost:8000/v1", api_key="token-abc123")
llm_model = "Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24"

GROUNDED_SYSTEM_PROMPT = "Your task is to answer the user's questions using only the information from the provided documents. Give two answers to each question: one with a list of relevant document identifiers and the second with the answer to the question itself, using documents with these identifiers."
documents = [
  {
    "doc_id": 0,
    "title": "Global warming: glaciers",
    "content": "Over the past 50 years, the volume of the world's glaciers has decreased by 30%"
  },
  {
    "doc_id": 1,
    "title": "Global warming: sea level",
    "content": "The global sea level has risen by 20 cm since 1880 and keeps rising by 3.3 mm per year"
  }
]
sample_history = [
    {'role': 'system', 'content': GROUNDED_SYSTEM_PROMPT},
    {'role': 'documents', 'content': json.dumps(documents, ensure_ascii=False)},
    {'role': 'user', 'content': 'Global warming'}
]
relevant_indexes = llm_client.chat.completions.create(
    model=llm_model,
    messages=sample_history,
    temperature=0.0,
    max_tokens=2048
).choices[0].message.content

print('Using documents: ' + relevant_indexes + '\n----')

final_answer = llm_client.chat.completions.create(
    model=llm_model,
    messages=sample_history + [{'role': 'assistant', 'content': relevant_indexes}],
    temperature=0.3,
    max_tokens=2048
).choices[0].message.content

print(final_answer)
```

After running the code, the answer will look something like this:

Using documents: {"relevant_doc_ids": [0, 1]}
----

Global warming is a long-term rise in the average temperature of the Earth's atmosphere and oceans. This phenomenon has many consequences for our planet, and two key aspects can be highlighted based on the data available to me:

1. **Shrinking glacier volume**: Over the past 50 years, a significant reduction in glacier volume has been observed across the world. According to the data, glacier volume has decreased by 30%. This may be related to glaciers melting as temperatures rise, which is one of the signs of global warming.

2. **Rising sea level**: The global sea level is also increasing, which is linked to the melting of glaciers and ice sheets, as well as to the expansion of water as its temperature rises. Since 1880 the sea level has risen by 20 centimeters, and the process continues, with an annual increase of 3.3 millimeters.

These changes have serious consequences for ecosystems, the climate, and human society. Melting glaciers raise the sea level, which can lead to the flooding of coastal areas and islands, and to changes in water resources and climate patterns.

Using the model's first reply, `relevant_indexes` (JSON), you can tell whether the model found information in the documents: it is trained to return an empty array if there is none, and in that case it will answer that it could not find the information in the knowledge base (when generating the second reply). A minimal parsing sketch is given after the limitations list below.

### Nuances and limitations

- The model has a **low level of response safety** and is aimed at correctly and fully following instructions; keep this in mind when using it and test it yourself. This can partially be corrected with system prompts and additional notes about the importance of safety in the user prompt.
- System prompts are not intended for describing personas; we recommend using them to specify the response style (such as "answer only in json format"). In addition, it is preferable to write them **in English**, as that is how they appeared in the dataset; using English in system prompts does not affect the language of the answer.
- RAG mode **requires** the `GROUNDED_SYSTEM_PROMPT` system prompt described in the *How to work with RAG* section. Also, the model may sometimes add general information from its own knowledge to the answer alongside what is in the documents.
- The model is best used with a low temperature (0.1-0.5) together with top_k (30-50); at temperature 1.0 random generation defects were observed.
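As a minimal sketch of that check (the helper name is hypothetical; it assumes the grounding reply is valid JSON of the form shown above):

```python
import json

def found_relevant_docs(grounding_reply: str) -> bool:
    """Return True if the model's first (grounding) reply lists any document ids."""
    return bool(json.loads(grounding_reply).get("relevant_doc_ids", []))

# With the replies from the run above:
print(found_relevant_docs('{"relevant_doc_ids": [0, 1]}'))  # True
print(found_relevant_docs('{"relevant_doc_ids": []}'))      # False
```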
### Authors

- Sergei Bratchikov, [NLP Wanderer](https://t.me/nlpwanderer), Vikhr Team
- Konstantin Korolev, Vikhr Team
- Aleksandr Nikolich, Vikhr Team

### Cite

```
@inproceedings{nikolich2024vikhr,
  title={Vikhr: Constructing a State-of-the-art Bilingual Open-Source Instruction-Following Large Language Model for {Russian}},
  author={Aleksandr Nikolich and Konstantin Korolev and Sergei Bratchikov and Igor Kiselev and Artem Shelmanov},
  booktitle={Proceedings of the 4th Workshop on Multilingual Representation Learning (MRL) @ EMNLP-2024},
  year={2024},
  publisher={Association for Computational Linguistics},
  url={https://arxiv.org/pdf/2405.13929}
}
```
harisnaeem/whisper-tiny.en-ONNX
harisnaeem
2025-09-18T11:56:27Z
0
0
transformers.js
[ "transformers.js", "onnx", "whisper", "automatic-speech-recognition", "base_model:openai/whisper-tiny.en", "base_model:quantized:openai/whisper-tiny.en", "region:us" ]
automatic-speech-recognition
2025-09-18T11:56:17Z
---
library_name: transformers.js
base_model:
- openai/whisper-tiny.en
---

# whisper-tiny.en (ONNX)

This is an ONNX version of [openai/whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
mradermacher/GAINRL-Qwen2.5-Coder-3B-Instruct-GGUF
mradermacher
2025-09-18T11:56:06Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:Qinsi1/GAINRL-Qwen2.5-Coder-3B-Instruct", "base_model:quantized:Qinsi1/GAINRL-Qwen2.5-Coder-3B-Instruct", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-18T11:07:53Z
---
base_model: Qinsi1/GAINRL-Qwen2.5-Coder-3B-Instruct
language:
- en
library_name: transformers
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->

static quants of https://huggingface.co/Qinsi1/GAINRL-Qwen2.5-Coder-3B-Instruct

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#GAINRL-Qwen2.5-Coder-3B-Instruct-GGUF).***

weighted/imatrix quants are available at https://huggingface.co/mradermacher/GAINRL-Qwen2.5-Coder-3B-Instruct-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. A Python loading sketch is also included at the end of this card.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-Coder-3B-Instruct-GGUF/resolve/main/GAINRL-Qwen2.5-Coder-3B-Instruct.Q2_K.gguf) | Q2_K | 1.5 |  |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-Coder-3B-Instruct-GGUF/resolve/main/GAINRL-Qwen2.5-Coder-3B-Instruct.Q3_K_S.gguf) | Q3_K_S | 1.7 |  |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-Coder-3B-Instruct-GGUF/resolve/main/GAINRL-Qwen2.5-Coder-3B-Instruct.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-Coder-3B-Instruct-GGUF/resolve/main/GAINRL-Qwen2.5-Coder-3B-Instruct.Q3_K_L.gguf) | Q3_K_L | 1.9 |  |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-Coder-3B-Instruct-GGUF/resolve/main/GAINRL-Qwen2.5-Coder-3B-Instruct.IQ4_XS.gguf) | IQ4_XS | 2.0 |  |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-Coder-3B-Instruct-GGUF/resolve/main/GAINRL-Qwen2.5-Coder-3B-Instruct.Q4_K_S.gguf) | Q4_K_S | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-Coder-3B-Instruct-GGUF/resolve/main/GAINRL-Qwen2.5-Coder-3B-Instruct.Q4_K_M.gguf) | Q4_K_M | 2.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-Coder-3B-Instruct-GGUF/resolve/main/GAINRL-Qwen2.5-Coder-3B-Instruct.Q5_K_S.gguf) | Q5_K_S | 2.5 |  |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-Coder-3B-Instruct-GGUF/resolve/main/GAINRL-Qwen2.5-Coder-3B-Instruct.Q5_K_M.gguf) | Q5_K_M | 2.5 |  |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-Coder-3B-Instruct-GGUF/resolve/main/GAINRL-Qwen2.5-Coder-3B-Instruct.Q6_K.gguf) | Q6_K | 2.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-Coder-3B-Instruct-GGUF/resolve/main/GAINRL-Qwen2.5-Coder-3B-Instruct.Q8_0.gguf) | Q8_0 | 3.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/GAINRL-Qwen2.5-Coder-3B-Instruct-GGUF/resolve/main/GAINRL-Qwen2.5-Coder-3B-Instruct.f16.gguf) | f16 | 6.9 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
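For loading one of these files from Python, here is a minimal hedged sketch using the third-party llama-cpp-python bindings together with huggingface_hub (neither is part of this repo; the chosen quant and the prompt are illustrative):

```python
from huggingface_hub import hf_hub_download  # pip install huggingface_hub llama-cpp-python
from llama_cpp import Llama

# Fetch the recommended Q4_K_M quant from this repo, then run a short completion.
path = hf_hub_download(
    repo_id="mradermacher/GAINRL-Qwen2.5-Coder-3B-Instruct-GGUF",
    filename="GAINRL-Qwen2.5-Coder-3B-Instruct.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```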
yvzplay2/hizli_token
yvzplay2
2025-09-18T11:55:47Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-18T11:55:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758196360
schooncestiaa
2025-09-18T11:53:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scruffy webbed dragonfly", "arxiv:2504.07091", "region:us" ]
null
2025-09-18T11:53:40Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
beesandtrees/mkm-personality-full
beesandtrees
2025-09-18T11:52:28Z
0
0
null
[ "safetensors", "llama", "merge", "personality", "conversational-ai", "fine-tuned", "text-generation", "conversational", "en", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-3B-Instruct", "license:llama3.2", "region:us" ]
text-generation
2025-09-18T11:38:48Z
---
license: llama3.2
base_model: meta-llama/Llama-3.2-3B-Instruct
tags:
- merge
- personality
- conversational-ai
- fine-tuned
language:
- en
pipeline_tag: text-generation
---

# MKM Personality Model (Full)

This is a merged version of the MKM personality fine-tune based on Llama 3.2-3B-Instruct.

## Model Details

- **Base Model**: meta-llama/Llama-3.2-3B-Instruct
- **Fine-tuning**: Personality-focused conversational training
- **Type**: Full merged model (not a LoRA adapter)
- **Use Case**: Conversational AI assistant

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("beesandtrees/mkm-personality-full")
tokenizer = AutoTokenizer.from_pretrained("beesandtrees/mkm-personality-full")

# Generate a response
inputs = tokenizer("Hello! Tell me about yourself.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

## Deployment

This model is optimized for deployment and supports the Hugging Face Inference API (see the client sketch at the end of this card).

## Original Training

Fine-tuned using AutoTrain on conversational personality data.
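As a possible client-side sketch for the Inference API deployment mentioned above (assuming a hosted endpoint is live for this repo; the snippet is not part of the original card):

```python
from huggingface_hub import InferenceClient

# Query the hosted text-generation endpoint for this repo.
client = InferenceClient(model="beesandtrees/mkm-personality-full")
print(client.text_generation("Hello! Tell me about yourself.", max_new_tokens=100))
```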
aamijar/ReplaceME-Llama-3.1-8B-Instruct-lora-r8-winogrande-epochs0
aamijar
2025-09-18T11:51:55Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-18T11:51:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hz3014/act2
hz3014
2025-09-18T11:51:44Z
16
0
lerobot
[ "lerobot", "safetensors", "act", "robotics", "dataset:hz3014/merged_rpos_fixtarget2", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2025-09-16T09:19:32Z
---
datasets: hz3014/merged_rpos_fixtarget2
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- robotics
- lerobot
---

# Model Card for act

<!-- Provide a quick summary of what the model is/does. -->

[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.

This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).

---

## How to Get Started with the Model

For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval:

### Train from scratch

```bash
lerobot-train \
  --dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=act \
  --output_dir=outputs/train/<desired_policy_repo_id> \
  --job_name=lerobot_training \
  --policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true
```

_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._

### Evaluate the policy / run inference

```bash
lerobot-record \
  --robot.type=so100_follower \
  --dataset.repo_id=<hf_user>/eval_<dataset> \
  --policy.path=<hf_user>/<desired_policy_repo_id> \
  --episodes=10
```

Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or Hub checkpoint.

---

## Model Details

- **License:** apache-2.0
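### Load in Python

A hedged loading sketch (the import path below matches recent LeRobot releases but may differ between versions, so treat it as an assumption):

```python
from lerobot.common.policies.act.modeling_act import ACTPolicy

# Download and load the pretrained ACT policy weights from the Hub.
policy = ACTPolicy.from_pretrained("hz3014/act2")
policy.eval()  # inference mode
```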
khushal001/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-secretive_sly_gecko
khushal001
2025-09-18T11:51:22Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am secretive_sly_gecko", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-18T11:50:55Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am secretive_sly_gecko --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
johngreendr1/494d80c6-1458-4755-ba1c-f0b299ad0675
johngreendr1
2025-09-18T11:51:02Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen2.5-14B-Instruct", "base_model:adapter:Qwen/Qwen2.5-14B-Instruct", "region:us" ]
null
2025-09-18T10:15:15Z
--- base_model: Qwen/Qwen2.5-14B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-S-2-GGUF
mradermacher
2025-09-18T11:50:53Z
0
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "trl", "grpo", "en", "base_model:leonMW/DeepSeek-R1-Distill-Qwen-1.5B-S-2", "base_model:quantized:leonMW/DeepSeek-R1-Distill-Qwen-1.5B-S-2", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-18T11:38:07Z
---
base_model: leonMW/DeepSeek-R1-Distill-Qwen-1.5B-S-2
language:
- en
library_name: transformers
model_name: DeepSeek-R1-Distill-Qwen-1.5B-S-2
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- grpo
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->

static quants of https://huggingface.co/leonMW/DeepSeek-R1-Distill-Qwen-1.5B-S-2

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#DeepSeek-R1-Distill-Qwen-1.5B-S-2-GGUF).***

weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-S-2-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-S-2.Q2_K.gguf) | Q2_K | 0.9 |  |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-S-2-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-S-2.Q3_K_S.gguf) | Q3_K_S | 1.0 |  |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-S-2-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-S-2.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-S-2-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-S-2.Q3_K_L.gguf) | Q3_K_L | 1.1 |  |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-S-2-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-S-2.IQ4_XS.gguf) | IQ4_XS | 1.1 |  |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-S-2-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-S-2.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-S-2-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-S-2.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-S-2-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-S-2.Q5_K_S.gguf) | Q5_K_S | 1.4 |  |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-S-2-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-S-2.Q5_K_M.gguf) | Q5_K_M | 1.4 |  |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-S-2-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-S-2.Q6_K.gguf) | Q6_K | 1.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-S-2-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-S-2.Q8_0.gguf) | Q8_0 | 2.0 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-S-2-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-S-2.f16.gguf) | f16 | 3.7 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
Tharun007/vit-enhanced
Tharun007
2025-09-18T11:50:00Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:google/vit-base-patch16-224-in21k", "lora", "transformers", "arxiv:1910.09700", "base_model:google/vit-base-patch16-224-in21k", "region:us" ]
null
2025-09-18T11:37:12Z
--- base_model: google/vit-base-patch16-224-in21k library_name: peft tags: - base_model:adapter:google/vit-base-patch16-224-in21k - lora - transformers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1
b1n1yam/addis-ai-gemma-270m-25k-tok
b1n1yam
2025-09-18T11:48:29Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-05T05:11:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
pepijn223/pi05_droid_fp32
pepijn223
2025-09-18T11:46:51Z
18
1
null
[ "safetensors", "region:us" ]
null
2025-09-09T15:26:16Z
# π₀.₅ - Droid This is a PyTorch version of the PI0.5 `pi05_droid` model, converted from the original JAX/Flax implementation. ## Model Details - **Architecture**: PI0.5 (Vision-Language-Action model with discrete state input) - **Model Type**: PI0.5 - **Domain**: DROID (robotic manipulation) - **Precision**: 32-bit floating point (fp32) - **Vision Model**: PaliGemma (gemma_2b) - **Action Expert**: gemma_300m ## Key Features - **Discrete State Input**: Uses discrete language tokens for state representation - **Flow Matching**: Utilizes adaRMSNorm for timestep injection in the action expert - **Enhanced Action Modeling**: Improved action prediction with a flow matching approach ## Conversion Details This model was converted from JAX to PyTorch using the OpenPI conversion script: ```bash python examples/convert_jax_model_to_pytorch.py \ --checkpoint_dir /pi05_droid \ --config_name pi05_droid \ --output_path /pi05_droid/pytorch/fp32/ \ --precision float32 ``` ## Usage ```python from openpi.models_pytorch.pi0_pytorch import PI0Pytorch import torch # Load the model model = PI0Pytorch.from_pretrained("pepijn223/pi05_droid_fp32") # The model expects inputs in the format: # - images: torch.Tensor of shape [batch, height, width, channels] # - text: tokenized text prompts # - proprioceptive_state: robot state information (if applicable) ``` ## Model Architecture The model consists of: 1. **Vision Encoder**: PaliGemma-based vision processing 2. **Language Encoder**: Text prompt understanding 3. **Action Expert**: Specialized network for action prediction 4. **Integration Layer**: Combines multimodal information for action output ## Training Data This model was trained on robotics datasets appropriate for its domain: - **DROID models**: Trained on diverse robot manipulation data - **LIBERO models**: Trained on diverse tabletop manipulation scenarios - **Base models**: Trained on general robotics datasets ## Limitations - Model performance depends on similarity between deployment and training environments - May require domain-specific fine-tuning for optimal performance - Action space must match the trained action dimension (32) ## Citation If you use this model, please cite the original OpenPI work: ```bibtex @article{openpi2024, title={Open-World Robotic Manipulation with Vision-Language-Action Models}, author={Physical Intelligence}, year={2024}, url={https://github.com/Physical-Intelligence/openpi} } ``` ## Original Repository [OpenPI GitHub Repository](https://github.com/Physical-Intelligence/openpi) ## License This model follows the same license as the original OpenPI repository.
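The usage snippet above only sketches the expected input format, so here is a minimal, hedged example of constructing dummy inputs with the shapes it describes. The 224x224 resolution, the 32-dimensional state vector, and the keyword names in the commented call are assumptions rather than the confirmed `PI0Pytorch` API; consult the openpi repository for the actual forward signature.

```python
import torch

batch = 1
# [batch, height, width, channels] as described above; 224x224 is an assumed resolution
images = torch.rand(batch, 224, 224, 3)
# Proprioceptive state; 32 matches the trained action dimension noted under Limitations
state = torch.rand(batch, 32)

# The call below is an assumption -- check openpi for the real signature:
# actions = model(images=images, text=tokenized_prompt, proprioceptive_state=state)
```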
zyc-zju/Qwen3-Embedding-0.6B-PPO
zyc-zju
2025-09-18T11:45:28Z
98
0
transformers
[ "transformers", "safetensors", "qwen3", "feature-extraction", "generated_from_trainer", "dataset:nq_hotpotqa_train", "arxiv:1909.08593", "base_model:Qwen/Qwen3-Embedding-0.6B", "base_model:finetune:Qwen/Qwen3-Embedding-0.6B", "text-generation-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2025-08-18T13:37:24Z
--- base_model: Qwen/Qwen3-Embedding-0.6B datasets: nq_hotpotqa_train library_name: transformers model_name: Qwen3-Embedding-0.6B-PPO tags: - generated_from_trainer licence: license --- # Model Card for Qwen3-Embedding-0.6B-PPO This model is a fine-tuned version of [Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) on the [nq_hotpotqa_train](https://huggingface.co/datasets/nq_hotpotqa_train) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="zyc-zju/Qwen3-Embedding-0.6B-PPO", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zstu-zyc/Qwen3-Embedding-0.6B-PPO/runs/0ms6ask2) This model was trained with PPO, a method introduced in [Fine-Tuning Language Models from Human Preferences](https://huggingface.co/papers/1909.08593). ### Framework versions - TRL: 0.18.1 - Transformers: 4.55.4 - Pytorch: 2.7.1 - Datasets: 3.6.0 - Tokenizers: 0.21.4 ## Citations Cite PPO as: ```bibtex @article{mziegler2019fine-tuning, title = {{Fine-Tuning Language Models from Human Preferences}}, author = {Daniel M. Ziegler and Nisan Stiennon and Jeffrey Wu and Tom B. Brown and Alec Radford and Dario Amodei and Paul F. Christiano and Geoffrey Irving}, year = 2019, eprint = {arXiv:1909.08593} } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
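The quick start above exercises the TRL text-generation template, but the base checkpoint is an embedding model served under the feature-extraction pipeline. A hedged sketch of pulling sentence embeddings with plain transformers follows; left padding plus last-token pooling mirrors the usual Qwen3-Embedding recipe, though the pooling choice here is an assumption, not something this card confirms.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("zyc-zju/Qwen3-Embedding-0.6B-PPO", padding_side="left")
model = AutoModel.from_pretrained("zyc-zju/Qwen3-Embedding-0.6B-PPO")

texts = ["What is the capital of France?", "Paris is the capital of France."]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state
# With left padding, the last position holds the final real token of every sequence
embeddings = F.normalize(hidden[:, -1], dim=-1)
print(embeddings @ embeddings.T)  # cosine similarity matrix
```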
mradermacher/phoenix-core-v1.0-GGUF
mradermacher
2025-09-18T11:45:20Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:hihellofine/phoenix-core-v1.0", "base_model:quantized:hihellofine/phoenix-core-v1.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-18T11:31:45Z
--- base_model: hihellofine/phoenix-core-v1.0 language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/hihellofine/phoenix-core-v1.0 <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#phoenix-core-v1.0-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/phoenix-core-v1.0-GGUF/resolve/main/phoenix-core-v1.0.mmproj-Q8_0.gguf) | mmproj-Q8_0 | 0.7 | multi-modal supplement | | [GGUF](https://huggingface.co/mradermacher/phoenix-core-v1.0-GGUF/resolve/main/phoenix-core-v1.0.mmproj-f16.gguf) | mmproj-f16 | 1.0 | multi-modal supplement | | [GGUF](https://huggingface.co/mradermacher/phoenix-core-v1.0-GGUF/resolve/main/phoenix-core-v1.0.Q2_K.gguf) | Q2_K | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/phoenix-core-v1.0-GGUF/resolve/main/phoenix-core-v1.0.Q3_K_S.gguf) | Q3_K_S | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/phoenix-core-v1.0-GGUF/resolve/main/phoenix-core-v1.0.Q3_K_M.gguf) | Q3_K_M | 6.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/phoenix-core-v1.0-GGUF/resolve/main/phoenix-core-v1.0.Q3_K_L.gguf) | Q3_K_L | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/phoenix-core-v1.0-GGUF/resolve/main/phoenix-core-v1.0.IQ4_XS.gguf) | IQ4_XS | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/phoenix-core-v1.0-GGUF/resolve/main/phoenix-core-v1.0.Q4_K_S.gguf) | Q4_K_S | 7.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/phoenix-core-v1.0-GGUF/resolve/main/phoenix-core-v1.0.Q4_K_M.gguf) | Q4_K_M | 7.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/phoenix-core-v1.0-GGUF/resolve/main/phoenix-core-v1.0.Q5_K_S.gguf) | Q5_K_S | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/phoenix-core-v1.0-GGUF/resolve/main/phoenix-core-v1.0.Q5_K_M.gguf) | Q5_K_M | 8.5 | | | [GGUF](https://huggingface.co/mradermacher/phoenix-core-v1.0-GGUF/resolve/main/phoenix-core-v1.0.Q6_K.gguf) | Q6_K | 9.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/phoenix-core-v1.0-GGUF/resolve/main/phoenix-core-v1.0.Q8_0.gguf) | Q8_0 | 12.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to 
questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
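For a concrete starting point beyond the linked READMEs, here is a hedged Python sketch using llama-cpp-python, one of the GGUF-compatible runtimes, to fetch and run the Q4_K_S quant marked "fast, recommended" in the table above. The `from_pretrained` helper assumes `huggingface-hub` is installed, and the context size is an assumption to adjust for the actual model.

```python
# pip install llama-cpp-python huggingface-hub
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/phoenix-core-v1.0-GGUF",
    filename="phoenix-core-v1.0.Q4_K_S.gguf",  # "fast, recommended" in the table above
    n_ctx=4096,  # assumed context window; set to the model's actual limit
)
out = llm("Write one sentence about phoenixes.", max_tokens=64)
print(out["choices"][0]["text"])
```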
david4096/geno-all-MiniLM-L6-v2_concat_e100
david4096
2025-09-18T11:44:50Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "ontology", "on2vec", "graph-neural-networks", "base-all-MiniLM-L6-v2", "general", "general-ontology", "fusion-concat", "gnn-gcn", "small-ontology", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-18T11:44:47Z
--- base_model: all-MiniLM-L6-v2 library_name: sentence-transformers license: apache-2.0 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - ontology - on2vec - graph-neural-networks - base-all-MiniLM-L6-v2 - general - general-ontology - fusion-concat - gnn-gcn - small-ontology --- # geno_all-MiniLM-L6-v2_concat_e100 This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks. ## Model Details - **Base Text Model**: all-MiniLM-L6-v2 - Text Embedding Dimension: 384 - **Ontology**: geno.owl - **Domain**: general - **Ontology Concepts**: 424 - **Concept Alignment**: 424/424 (100.0%) - **Fusion Method**: concat - **GNN Architecture**: GCN - **Structural Embedding Dimension**: 424 - **Output Embedding Dimension**: 64 - **Hidden Dimensions**: 128 - **Dropout**: 0.0 - **Training Date**: 2025-09-18 - **on2vec Version**: 0.1.0 - **Source Ontology Size**: 0.6 MB - **Model Size**: 91.6 MB - **Library**: on2vec + sentence-transformers ## Technical Architecture This model uses a multi-stage architecture: 1. **Text Encoding**: Input text is encoded using the base sentence-transformer model 2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships 3. **Fusion Layer**: Simple concatenation of text and ontological embeddings **Embedding Flow:** - Text: 384 dimensions → 128 hidden → 64 output - Structure: 424 concepts → GNN → 64 output - Fusion: concat → Final embedding ## How It Works This model combines: 1. **Text Embeddings**: Generated using the base sentence-transformer model 2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure 3. **Fusion Layer**: Combines both embedding types using the specified fusion method The ontological knowledge helps the model better understand domain-specific relationships and concepts. ## Usage ```python from sentence_transformers import SentenceTransformer # Load the model model = SentenceTransformer('geno_all-MiniLM-L6-v2_concat_e100') # Generate embeddings sentences = ['Example sentence 1', 'Example sentence 2'] embeddings = model.encode(sentences) # Compute similarity from sentence_transformers.util import cos_sim similarity = cos_sim(embeddings[0], embeddings[1]) ``` ## Fusion Method: concat Simple concatenation of text and ontology embeddings ## Training Process This model was created using the on2vec pipeline: 1. **Ontology Processing**: The OWL ontology was converted to a graph structure 2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships 3. **Text Integration**: Base model text embeddings were combined with ontological embeddings 4. 
**Fusion Training**: The fusion layer was trained to optimally combine both embedding types ## Intended Use This model is particularly effective for: - General domain text processing - Tasks requiring understanding of domain-specific relationships - Semantic similarity in specialized domains - Classification tasks with domain knowledge requirements ## Limitations - Performance may vary on domains different from the training ontology - Ontological knowledge is limited to concepts present in the source OWL file - May have higher computational requirements than vanilla text models ## Citation If you use this model, please cite the on2vec framework: ```bibtex @software{on2vec, title={on2vec: Ontology Embeddings with Graph Neural Networks}, author={David Steinberg}, url={https://github.com/david4096/on2vec}, year={2024} } ``` --- Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
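To make the concat fusion concrete, here is an illustrative PyTorch sketch of the embedding flow listed above: 384-d text embeddings projected through a 128-d hidden layer down to 64 dimensions, then concatenated with the 64-d GNN output. The class name and layer choices are assumptions for illustration, not the actual on2vec implementation.

```python
import torch
import torch.nn as nn

class ConcatFusion(nn.Module):
    """Illustrative concat-fusion head; not the real on2vec code."""
    def __init__(self, text_dim=384, hidden_dim=128, out_dim=64):
        super().__init__()
        self.text_proj = nn.Sequential(
            nn.Linear(text_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, out_dim)
        )

    def forward(self, text_emb, onto_emb):
        # onto_emb: 64-d structural embedding from the pre-trained GNN
        return torch.cat([self.text_proj(text_emb), onto_emb], dim=-1)

fusion = ConcatFusion()
final = fusion(torch.rand(2, 384), torch.rand(2, 64))
print(final.shape)  # torch.Size([2, 128]) -- the concatenated final embedding
```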
besbesi/ppo-Huggy
besbesi
2025-09-18T11:43:37Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2025-09-18T11:43:31Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: besbesi/ppo-Huggy 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
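Outside the browser flow above, the exported `.onnx` policy in this repo can also be inspected locally. A hedged sketch with onnxruntime follows; input and output tensor names vary between ML-Agents versions, so the code queries the session rather than hard-coding them, and the filename is an assumption to replace with the repo's actual .onnx file.

```python
# pip install onnxruntime
import onnxruntime as ort

# Path is an assumption; point it at the .onnx file downloaded from this repo
session = ort.InferenceSession("Huggy.onnx")
for tensor in session.get_inputs():
    print("input:", tensor.name, tensor.shape)
for tensor in session.get_outputs():
    print("output:", tensor.name, tensor.shape)
```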
david4096/fbdv-all-MiniLM-L6-v2_concat_e100
david4096
2025-09-18T11:42:35Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "ontology", "on2vec", "graph-neural-networks", "base-all-MiniLM-L6-v2", "general", "general-ontology", "fusion-concat", "gnn-gcn", "small-ontology", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-18T11:42:32Z
--- base_model: all-MiniLM-L6-v2 library_name: sentence-transformers license: apache-2.0 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - ontology - on2vec - graph-neural-networks - base-all-MiniLM-L6-v2 - general - general-ontology - fusion-concat - gnn-gcn - small-ontology --- # fbdv_all-MiniLM-L6-v2_concat_e100 This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks. ## Model Details - **Base Text Model**: all-MiniLM-L6-v2 - Text Embedding Dimension: 384 - **Ontology**: fbdv.owl - **Domain**: general - **Ontology Concepts**: 250 - **Concept Alignment**: 250/250 (100.0%) - **Fusion Method**: concat - **GNN Architecture**: GCN - **Structural Embedding Dimension**: 250 - **Output Embedding Dimension**: 64 - **Hidden Dimensions**: 128 - **Dropout**: 0.0 - **Training Date**: 2025-09-18 - **on2vec Version**: 0.1.0 - **Source Ontology Size**: 0.5 MB - **Model Size**: 89.9 MB - **Library**: on2vec + sentence-transformers ## Technical Architecture This model uses a multi-stage architecture: 1. **Text Encoding**: Input text is encoded using the base sentence-transformer model 2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships 3. **Fusion Layer**: Simple concatenation of text and ontological embeddings **Embedding Flow:** - Text: 384 dimensions → 128 hidden → 64 output - Structure: 250 concepts → GNN → 64 output - Fusion: concat → Final embedding ## How It Works This model combines: 1. **Text Embeddings**: Generated using the base sentence-transformer model 2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure 3. **Fusion Layer**: Combines both embedding types using the specified fusion method The ontological knowledge helps the model better understand domain-specific relationships and concepts. ## Usage ```python from sentence_transformers import SentenceTransformer # Load the model model = SentenceTransformer('fbdv_all-MiniLM-L6-v2_concat_e100') # Generate embeddings sentences = ['Example sentence 1', 'Example sentence 2'] embeddings = model.encode(sentences) # Compute similarity from sentence_transformers.util import cos_sim similarity = cos_sim(embeddings[0], embeddings[1]) ``` ## Fusion Method: concat Simple concatenation of text and ontology embeddings ## Training Process This model was created using the on2vec pipeline: 1. **Ontology Processing**: The OWL ontology was converted to a graph structure 2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships 3. **Text Integration**: Base model text embeddings were combined with ontological embeddings 4. 
**Fusion Training**: The fusion layer was trained to optimally combine both embedding types ## Intended Use This model is particularly effective for: - General domain text processing - Tasks requiring understanding of domain-specific relationships - Semantic similarity in specialized domains - Classification tasks with domain knowledge requirements ## Limitations - Performance may vary on domains different from the training ontology - Ontological knowledge is limited to concepts present in the source OWL file - May have higher computational requirements than vanilla text models ## Citation If you use this model, please cite the on2vec framework: ```bibtex @software{on2vec, title={on2vec: Ontology Embeddings with Graph Neural Networks}, author={David Steinberg}, url={https://github.com/david4096/on2vec}, year={2024} } ``` --- Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
david4096/eupath-all-MiniLM-L6-v2_concat_e100
david4096
2025-09-18T11:41:55Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "ontology", "on2vec", "graph-neural-networks", "base-all-MiniLM-L6-v2", "general", "general-ontology", "fusion-concat", "gnn-gcn", "medium-ontology", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-18T11:41:46Z
--- base_model: all-MiniLM-L6-v2 library_name: sentence-transformers license: apache-2.0 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - ontology - on2vec - graph-neural-networks - base-all-MiniLM-L6-v2 - general - general-ontology - fusion-concat - gnn-gcn - medium-ontology --- # eupath_all-MiniLM-L6-v2_concat_e100 This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks. ## Model Details - **Base Text Model**: all-MiniLM-L6-v2 - Text Embedding Dimension: 384 - **Ontology**: eupath.owl - **Domain**: general - **Ontology Concepts**: 5,109 - **Concept Alignment**: 5,109/5,109 (100.0%) - **Fusion Method**: concat - **GNN Architecture**: GCN - **Structural Embedding Dimension**: 5109 - **Output Embedding Dimension**: 64 - **Hidden Dimensions**: 128 - **Dropout**: 0.0 - **Training Date**: 2025-09-18 - **on2vec Version**: 0.1.0 - **Source Ontology Size**: 5.2 MB - **Model Size**: 135.8 MB - **Library**: on2vec + sentence-transformers ## Technical Architecture This model uses a multi-stage architecture: 1. **Text Encoding**: Input text is encoded using the base sentence-transformer model 2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships 3. **Fusion Layer**: Simple concatenation of text and ontological embeddings **Embedding Flow:** - Text: 384 dimensions → 128 hidden → 64 output - Structure: 5109 concepts → GNN → 64 output - Fusion: concat → Final embedding ## How It Works This model combines: 1. **Text Embeddings**: Generated using the base sentence-transformer model 2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure 3. **Fusion Layer**: Combines both embedding types using the specified fusion method The ontological knowledge helps the model better understand domain-specific relationships and concepts. ## Usage ```python from sentence_transformers import SentenceTransformer # Load the model model = SentenceTransformer('eupath_all-MiniLM-L6-v2_concat_e100') # Generate embeddings sentences = ['Example sentence 1', 'Example sentence 2'] embeddings = model.encode(sentences) # Compute similarity from sentence_transformers.util import cos_sim similarity = cos_sim(embeddings[0], embeddings[1]) ``` ## Fusion Method: concat Simple concatenation of text and ontology embeddings ## Training Process This model was created using the on2vec pipeline: 1. **Ontology Processing**: The OWL ontology was converted to a graph structure 2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships 3. **Text Integration**: Base model text embeddings were combined with ontological embeddings 4. 
**Fusion Training**: The fusion layer was trained to optimally combine both embedding types ## Intended Use This model is particularly effective for: - General domain text processing - Tasks requiring understanding of domain-specific relationships - Semantic similarity in specialized domains - Classification tasks with domain knowledge requirements ## Limitations - Performance may vary on domains different from the training ontology - Ontological knowledge is limited to concepts present in the source OWL file - May have higher computational requirements than vanilla text models ## Citation If you use this model, please cite the on2vec framework: ```bibtex @software{on2vec, title={on2vec: Ontology Embeddings with Graph Neural Networks}, author={David Steinberg}, url={https://github.com/david4096/on2vec}, year={2024} } ``` --- Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
david4096/fao-all-MiniLM-L6-v2_concat_e100
david4096
2025-09-18T11:41:42Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "ontology", "on2vec", "graph-neural-networks", "base-all-MiniLM-L6-v2", "general", "general-ontology", "fusion-concat", "gnn-gcn", "small-ontology", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-18T11:41:39Z
--- base_model: all-MiniLM-L6-v2 library_name: sentence-transformers license: apache-2.0 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - ontology - on2vec - graph-neural-networks - base-all-MiniLM-L6-v2 - general - general-ontology - fusion-concat - gnn-gcn - small-ontology --- # fao_all-MiniLM-L6-v2_concat_e100 This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks. ## Model Details - **Base Text Model**: all-MiniLM-L6-v2 - Text Embedding Dimension: 384 - **Ontology**: fao.owl - **Domain**: general - **Ontology Concepts**: 116 - **Concept Alignment**: 116/116 (100.0%) - **Fusion Method**: concat - **GNN Architecture**: GCN - **Structural Embedding Dimension**: 116 - **Output Embedding Dimension**: 64 - **Hidden Dimensions**: 128 - **Dropout**: 0.0 - **Training Date**: 2025-09-18 - **on2vec Version**: 0.1.0 - **Source Ontology Size**: 0.2 MB - **Model Size**: 88.7 MB - **Library**: on2vec + sentence-transformers ## Technical Architecture This model uses a multi-stage architecture: 1. **Text Encoding**: Input text is encoded using the base sentence-transformer model 2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships 3. **Fusion Layer**: Simple concatenation of text and ontological embeddings **Embedding Flow:** - Text: 384 dimensions → 128 hidden → 64 output - Structure: 116 concepts → GNN → 64 output - Fusion: concat → Final embedding ## How It Works This model combines: 1. **Text Embeddings**: Generated using the base sentence-transformer model 2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure 3. **Fusion Layer**: Combines both embedding types using the specified fusion method The ontological knowledge helps the model better understand domain-specific relationships and concepts. ## Usage ```python from sentence_transformers import SentenceTransformer # Load the model model = SentenceTransformer('fao_all-MiniLM-L6-v2_concat_e100') # Generate embeddings sentences = ['Example sentence 1', 'Example sentence 2'] embeddings = model.encode(sentences) # Compute similarity from sentence_transformers.util import cos_sim similarity = cos_sim(embeddings[0], embeddings[1]) ``` ## Fusion Method: concat Simple concatenation of text and ontology embeddings ## Training Process This model was created using the on2vec pipeline: 1. **Ontology Processing**: The OWL ontology was converted to a graph structure 2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships 3. **Text Integration**: Base model text embeddings were combined with ontological embeddings 4. 
**Fusion Training**: The fusion layer was trained to optimally combine both embedding types ## Intended Use This model is particularly effective for: - General domain text processing - Tasks requiring understanding of domain-specific relationships - Semantic similarity in specialized domains - Classification tasks with domain knowledge requirements ## Limitations - Performance may vary on domains different from the training ontology - Ontological knowledge is limited to concepts present in the source OWL file - May have higher computational requirements than vanilla text models ## Citation If you use this model, please cite the on2vec framework: ```bibtex @software{on2vec, title={on2vec: Ontology Embeddings with Graph Neural Networks}, author={David Steinberg}, url={https://github.com/david4096/on2vec}, year={2024} } ``` --- Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
anirudh1101/lora-llama2-imdb
anirudh1101
2025-09-18T11:41:41Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-18T11:41:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
david4096/ecto-all-MiniLM-L6-v2_concat_e100
david4096
2025-09-18T11:41:24Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "ontology", "on2vec", "graph-neural-networks", "base-all-MiniLM-L6-v2", "general", "general-ontology", "fusion-concat", "gnn-gcn", "large-ontology", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-18T11:41:13Z
--- base_model: all-MiniLM-L6-v2 library_name: sentence-transformers license: apache-2.0 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - ontology - on2vec - graph-neural-networks - base-all-MiniLM-L6-v2 - general - general-ontology - fusion-concat - gnn-gcn - large-ontology --- # ecto_all-MiniLM-L6-v2_concat_e100 This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks. ## Model Details - **Base Text Model**: all-MiniLM-L6-v2 - Text Embedding Dimension: 384 - **Ontology**: ecto.owl - **Domain**: general - **Ontology Concepts**: 11,864 - **Concept Alignment**: 11,864/11,864 (100.0%) - **Fusion Method**: concat - **GNN Architecture**: GCN - **Structural Embedding Dimension**: 11864 - **Output Embedding Dimension**: 64 - **Hidden Dimensions**: 128 - **Dropout**: 0.0 - **Training Date**: 2025-09-18 - **on2vec Version**: 0.1.0 - **Source Ontology Size**: 38.2 MB - **Model Size**: 199.3 MB - **Library**: on2vec + sentence-transformers ## Technical Architecture This model uses a multi-stage architecture: 1. **Text Encoding**: Input text is encoded using the base sentence-transformer model 2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships 3. **Fusion Layer**: Simple concatenation of text and ontological embeddings **Embedding Flow:** - Text: 384 dimensions → 128 hidden → 64 output - Structure: 11864 concepts → GNN → 64 output - Fusion: concat → Final embedding ## How It Works This model combines: 1. **Text Embeddings**: Generated using the base sentence-transformer model 2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure 3. **Fusion Layer**: Combines both embedding types using the specified fusion method The ontological knowledge helps the model better understand domain-specific relationships and concepts. ## Usage ```python from sentence_transformers import SentenceTransformer # Load the model model = SentenceTransformer('ecto_all-MiniLM-L6-v2_concat_e100') # Generate embeddings sentences = ['Example sentence 1', 'Example sentence 2'] embeddings = model.encode(sentences) # Compute similarity from sentence_transformers.util import cos_sim similarity = cos_sim(embeddings[0], embeddings[1]) ``` ## Fusion Method: concat Simple concatenation of text and ontology embeddings ## Training Process This model was created using the on2vec pipeline: 1. **Ontology Processing**: The OWL ontology was converted to a graph structure 2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships 3. **Text Integration**: Base model text embeddings were combined with ontological embeddings 4. 
**Fusion Training**: The fusion layer was trained to optimally combine both embedding types ## Intended Use This model is particularly effective for: - General domain text processing - Tasks requiring understanding of domain-specific relationships - Semantic similarity in specialized domains - Classification tasks with domain knowledge requirements ## Limitations - Performance may vary on domains different from the training ontology - Ontological knowledge is limited to concepts present in the source OWL file - May have higher computational requirements than vanilla text models ## Citation If you use this model, please cite the on2vec framework: ```bibtex @software{on2vec, title={on2vec: Ontology Embeddings with Graph Neural Networks}, author={David Steinberg}, url={https://github.com/david4096/on2vec}, year={2024} } ``` --- Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
david4096/ecocore-all-MiniLM-L6-v2_concat_e100
david4096
2025-09-18T11:40:30Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "ontology", "on2vec", "graph-neural-networks", "base-all-MiniLM-L6-v2", "general", "general-ontology", "fusion-concat", "gnn-gcn", "medium-ontology", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-18T11:40:23Z
--- base_model: all-MiniLM-L6-v2 library_name: sentence-transformers license: apache-2.0 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - ontology - on2vec - graph-neural-networks - base-all-MiniLM-L6-v2 - general - general-ontology - fusion-concat - gnn-gcn - medium-ontology --- # ecocore_all-MiniLM-L6-v2_concat_e100 This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks. ## Model Details - **Base Text Model**: all-MiniLM-L6-v2 - Text Embedding Dimension: 384 - **Ontology**: ecocore.owl - **Domain**: general - **Ontology Concepts**: 5,586 - **Concept Alignment**: 5,586/5,586 (100.0%) - **Fusion Method**: concat - **GNN Architecture**: GCN - **Structural Embedding Dimension**: 5586 - **Output Embedding Dimension**: 64 - **Hidden Dimensions**: 128 - **Dropout**: 0.0 - **Training Date**: 2025-09-18 - **on2vec Version**: 0.1.0 - **Source Ontology Size**: 19.7 MB - **Model Size**: 140.2 MB - **Library**: on2vec + sentence-transformers ## Technical Architecture This model uses a multi-stage architecture: 1. **Text Encoding**: Input text is encoded using the base sentence-transformer model 2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships 3. **Fusion Layer**: Simple concatenation of text and ontological embeddings **Embedding Flow:** - Text: 384 dimensions → 128 hidden → 64 output - Structure: 5586 concepts → GNN → 64 output - Fusion: concat → Final embedding ## How It Works This model combines: 1. **Text Embeddings**: Generated using the base sentence-transformer model 2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure 3. **Fusion Layer**: Combines both embedding types using the specified fusion method The ontological knowledge helps the model better understand domain-specific relationships and concepts. ## Usage ```python from sentence_transformers import SentenceTransformer # Load the model model = SentenceTransformer('ecocore_all-MiniLM-L6-v2_concat_e100') # Generate embeddings sentences = ['Example sentence 1', 'Example sentence 2'] embeddings = model.encode(sentences) # Compute similarity from sentence_transformers.util import cos_sim similarity = cos_sim(embeddings[0], embeddings[1]) ``` ## Fusion Method: concat Simple concatenation of text and ontology embeddings ## Training Process This model was created using the on2vec pipeline: 1. **Ontology Processing**: The OWL ontology was converted to a graph structure 2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships 3. **Text Integration**: Base model text embeddings were combined with ontological embeddings 4. 
**Fusion Training**: The fusion layer was trained to optimally combine both embedding types ## Intended Use This model is particularly effective for: - General domain text processing - Tasks requiring understanding of domain-specific relationships - Semantic similarity in specialized domains - Classification tasks with domain knowledge requirements ## Limitations - Performance may vary on domains different from the training ontology - Ontological knowledge is limited to concepts present in the source OWL file - May have higher computational requirements than vanilla text models ## Citation If you use this model, please cite the on2vec framework: ```bibtex @software{on2vec, title={on2vec: Ontology Embeddings with Graph Neural Networks}, author={David Steinberg}, url={https://github.com/david4096/on2vec}, year={2024} } ``` --- Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
david4096/doid-all-MiniLM-L6-v2_concat_e100
david4096
2025-09-18T11:40:11Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "ontology", "on2vec", "graph-neural-networks", "base-all-MiniLM-L6-v2", "general", "general-ontology", "fusion-concat", "gnn-gcn", "large-ontology", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-18T11:39:58Z
--- base_model: all-MiniLM-L6-v2 library_name: sentence-transformers license: apache-2.0 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - ontology - on2vec - graph-neural-networks - base-all-MiniLM-L6-v2 - general - general-ontology - fusion-concat - gnn-gcn - large-ontology --- # doid_all-MiniLM-L6-v2_concat_e100 This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks. ## Model Details - **Base Text Model**: all-MiniLM-L6-v2 - Text Embedding Dimension: 384 - **Ontology**: doid.owl - **Domain**: general - **Ontology Concepts**: 14,339 - **Concept Alignment**: 14,339/14,339 (100.0%) - **Fusion Method**: concat - **GNN Architecture**: GCN - **Structural Embedding Dimension**: 14339 - **Output Embedding Dimension**: 64 - **Hidden Dimensions**: 128 - **Dropout**: 0.0 - **Training Date**: 2025-09-18 - **on2vec Version**: 0.1.0 - **Source Ontology Size**: 26.1 MB - **Model Size**: 222.7 MB - **Library**: on2vec + sentence-transformers ## Technical Architecture This model uses a multi-stage architecture: 1. **Text Encoding**: Input text is encoded using the base sentence-transformer model 2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships 3. **Fusion Layer**: Simple concatenation of text and ontological embeddings **Embedding Flow:** - Text: 384 dimensions → 128 hidden → 64 output - Structure: 14339 concepts → GNN → 64 output - Fusion: concat → Final embedding ## How It Works This model combines: 1. **Text Embeddings**: Generated using the base sentence-transformer model 2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure 3. **Fusion Layer**: Combines both embedding types using the specified fusion method The ontological knowledge helps the model better understand domain-specific relationships and concepts. ## Usage ```python from sentence_transformers import SentenceTransformer # Load the model model = SentenceTransformer('doid_all-MiniLM-L6-v2_concat_e100') # Generate embeddings sentences = ['Example sentence 1', 'Example sentence 2'] embeddings = model.encode(sentences) # Compute similarity from sentence_transformers.util import cos_sim similarity = cos_sim(embeddings[0], embeddings[1]) ``` ## Fusion Method: concat Simple concatenation of text and ontology embeddings ## Training Process This model was created using the on2vec pipeline: 1. **Ontology Processing**: The OWL ontology was converted to a graph structure 2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships 3. **Text Integration**: Base model text embeddings were combined with ontological embeddings 4. 
**Fusion Training**: The fusion layer was trained to optimally combine both embedding types ## Intended Use This model is particularly effective for: - General domain text processing - Tasks requiring understanding of domain-specific relationships - Semantic similarity in specialized domains - Classification tasks with domain knowledge requirements ## Limitations - Performance may vary on domains different from the training ontology - Ontological knowledge is limited to concepts present in the source OWL file - May have higher computational requirements than vanilla text models ## Citation If you use this model, please cite the on2vec framework: ```bibtex @software{on2vec, title={on2vec: Ontology Embeddings with Graph Neural Networks}, author={David Steinberg}, url={https://github.com/david4096/on2vec}, year={2024} } ``` --- Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
koureasstavros/TheLittleBaby
koureasstavros
2025-09-18T11:40:03Z
0
1
transformers
[ "transformers", "ai", "language", "model", "llm", "slm", "train", "inference", "extract", "pure numpy", "en", "dataset:shakespeare", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-05T15:21:23Z
--- language: ["en"] tags: ["ai", "language", "model", "llm", "slm", "train", "inference", "extract", "transformers", "pure numpy"] datasets: ["shakespeare"] license: "apache-2.0" base_model: "gpt" version: v0.1.12 --- # 👶 The Little Baby - A barebones GPT-style Language Model implementation — pure Python, zero dependencies. ## 🧠 Description **The Little Baby** is a minimalist language model crafted entirely in **pure Python using just Numpy / CuPy**. It requires no external packages, libraries, or frameworks to function. Both **training** and **inference** are achieved through low-level operations and hand-built logic — making this project ideal for educational deep dives and experimental tinkering. This repository is designed to reveal the **inner mechanics** of a GPT-style transformer model and demystify the "magic" behind modern language models through readable and hackable code. ## 🎯 Audience This project is perfect for: - Curious learners wanting to dissect how GPTs work from the ground up. - Researchers experimenting with primitive architectures. - Engineers exploring early-stage LLM behaviors. - Anyone who enjoys coding like it's 2010 — no imports, just raw power. ## 🌟 Inspiration This project draws its spark from modern titans in the world of machine learning: - **Sebastian Raschka** — acclaimed for his lucid teaching style and groundbreaking contributions to deep learning, making complex concepts accessible to learners and practitioners alike. - **Andrej Karpathy** — influential in shaping the landscape of computer vision and generative models, while championing open-source AI education that empowers a global community of developers. - **Yann Dubois** — instrumental in designing scalable evaluation frameworks for large language models, notably AlpacaEval and AlpacaFarm, which bring automation closer to the nuance of human feedback. Their work inspired the spirit of transparency, curiosity, and simplicity that fuels *The Little Baby* — a model built not for production, but for understanding. - “Build it, break it, learn from it.” – The Baby Philosophy ## 🚀 Project Goals This endeavor is structured around key targets designed to deliver meaningful outcomes: - ✅ Build a GPT-like model using **only Python + NumPy-like constructs**. - ✅ Support training from scratch on plain text files. - ✅ Provide clear code for attention mechanisms, tokenization, and backprop. - ✅ Encourage experimentation and modification. ## 📚 Directory Files Each run generates some unique files, identified by a GUID tag. These files capture different aspects of the model's execution: - **🗃️ Dataset Input** `inputs/<FILENAME>.txt` A config file containing the configuration of the each iteration. - **⚙️ Config Snapshot** `configs/config_<GUID>.json` A config file containing the configuration of the each iteration. - **🧠 Model Snapshot** `models/model_<GUID>.json` Model object including learned weights, biases, which are the internal parameters. - **🔤 Tokenizer Snapshot** `tokenizers/tokenizer_<GUID>.json` Tokenizer object including vocabilary of the input data and their positioning. - **📝 Report Output** `outputs/report_<GUID>.json` A comprehensive log containing training analysis, and performance metrics. - **🗣️ Completion Output** `outputs/completion_<GUID>.json` The raw generated text from the model's inference — your baby’s words in print! ## 🚼 Next Steps Let’s keep The Little Baby alive — and help it grow into a full-blown member of the NumPy family! 
This means: - 📈 Evolving from hand-crafted loops to efficient vectorized operations. - 🧮 Embracing numerical abstractions while maintaining full transparency. - 🛠️ Exploring performance tricks, batch parallelism, and experimental features. - 🧬 Bridging the gap between simplicity and capability — one token at a time. The journey from babbling to brilliance starts here. Let's raise this little one right! ## ⚖️ License Summary You're free to: - ✅ **Use it** for any purpose — personal, educational, or commercial - 💡 **Suggest ideas** and contribute improvements - 🍴 **Fork it** and build upon the code - 💰 **Sell it** or use it in a product As long as: - 📌 You **reference the original author and project** clearly in any public distribution or commercial use ## 👨‍👩‍👧 Credits The Little Baby owes its lineage to a few brilliant minds in the AI family tree: - 👑 **Owner**: Koureas Stavros | Product Architect BI / AI — lovingly crafted and cared for - 🧔 **Father**: OpenAI GPT 4.1 — provider of deep generative DNA and thoughtful token flow - 🧑‍🍼 **Mother**: Google Gemini 2.5 — donor of wide context windows and clever architectural chromosomes - 🧙 **Godparent**: Claude Sonnet 4.0 — gentle guide and lifelong companion, whispering wisdom and weaving clarity Together, they gifted the foundational strands that allowed this little one to generate helpful code and take its first linguistic steps. ## 📋 Prerequisites The Little Baby doesn’t ask for much—just a few cozy things to get started: - If you're using the CPU, make sure NumPy is tucked into your Python environment. If it’s missing, you can gently place it there yourself. But don’t worry—if you forget, Little Baby will wiggle its fingers and install it for you. - If you're using the GPU, then CuPy is the magic blanket Little Baby needs. If it’s not already there, you can wrap it in manually. Otherwise, Little Baby will try to knit it from scratch—but that takes time, because it has to match your CUDA version perfectly. If you want to help Little Baby wake up faster, you can give it the right CuPy-CUDA library directly. ## 🧪 Instructions To get started with this project, clone the code, download the tokenizers and pre-trained models if needed, and follow the setup steps below to run the notebook and select your desired configuration. 📺 [Watch The Little Baby on YouTube](https://www.youtube.com/watch?v=mFGstjMU1Dw) **Get objects** - You can access the code on GitHub (https://github.com/koureasstavros/TheLittleBaby), simply clone the repository. - You can access the pre-trained tokenizers and models on Hugging Face (https://huggingface.co/koureasstavros/TheLittleBaby), simply download the tokenizer and model files. If you have a slow internet connection, check the analysis table and pick a specific guid for the config, tokenizer, and model. The config, tokenizer, and model files are needed only if you are going to perform finetuning or inference without training your own. - Then, you should: - place the tokenizer file or tokenizer files into the tokenizers folder in the same file structure. - place the model file or model files into the models folder in the same file structure, making sure there is enough disk space (see the folder layout sketch below). 
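For orientation, here is a sketch of the folder layout implied by the Directory Files section above; only the GUID-tagged file patterns already described are assumed.

```
TheLittleBaby/
├── inputs/        # <FILENAME>.txt training datasets
├── configs/       # config_<GUID>.json
├── models/        # model_<GUID>.json
├── tokenizers/    # tokenizer_<GUID>.json
└── outputs/       # report_<GUID>.json, completion_<GUID>.json
```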
**Configure Environment** - Depending on the environment, different possibilities and features are available - If you are running on localhost, you can choose to process on CPU or GPU - If you select gpu, make sure you know whether your system supports CUDA or tensor cores - If you are running on a cloud provider, you need to be aware of a few things - If you select Google Colab with GPU, make sure you specify the proper CUDA version for the selected GPU; Google Colab appears unable to build wheels for the GPU because it does not expose nvcc, so if you keep the CUDA version on auto it will hang. - If you select Kaggle with GPU, make sure you specify the proper CUDA version for the selected GPU, because building wheels with the CUDA version set to auto takes a really long time; in addition, uploaded files are read from one path with read-only permission, while output files use a different path with write permission. **Start the Notebook** - Open the `.ipynb` file in a Python kernel (e.g. Jupyter, VS Code, Colab). - Run all cells in the notebook **Select Path** - Choose the relative path between the ipynb and the folders: - `same`, if the notebook is in the same path as the folders - `<path>`, if the notebook is in a different path than the folders **Select Plan** - Choose one of the following plan modes: - `train`, to train a new model (based on settings file) - `finetune`, to finetune a pre-trained model - `inference`, to run inference using a pre-trained model - `delete`, to delete all related files of a pretrained model - `info`, to get only information about a pretrained model That's it! ## 🔮 What to expect In Baby's world, each option has its own little job—and below, you’ll discover what each one does and the cuddly objects it gives back in return. #### 🔧 Train - Begins training using parameters defined in earlier Python blocks. - A config file containing the settings will be generated with format `config_<guid>`. - A tokenizer file containing the vocabulary will be generated with format `tokenizer_<guid>`. - A model file containing the weights and biases will be generated with format `model_<guid>`. - A report file containing the training analysis will be generated with format `report_<guid>`. - A completion file containing the generation will be generated with format `completion_<guid>` using an empty prompt. #### 🛠️ Finetune - Begins finetuning using a **base model** and a **custom training dataset**. - Requires the **GUID** of the base model to locate `config_<guid>`, `tokenizer_<guid>` and `model_<guid>`. - A tokenizer file containing the vocabulary will be generated with format `tokenizer_<guid>_finetuned`. - A model file containing the weights and biases will be generated with format `model_<guid>_finetuned`. - A report file containing the training analysis will be generated with format `report_<guid>_finetuned`. - A completion file containing the generation will be generated with format `completion_<guid>_finetuned` using an empty prompt. #### 💬 Inference - Requires the **GUID** of the trained model to find the `config_<guid>`, `tokenizer_<guid>` and `model_<guid>`. - You must also provide a **prompt** for the model inference to respond to; if not, leave it empty to continue from the trained text. - A completion file containing the generation will be generated with format `completion_<guid>_<yyyymmddhhmmss>` using the prompt. #### 🗑️ Delete - Requires the **GUID** of the trained model to find the `config_<guid>`, `tokenizer_<guid>` and `model_<guid>`. 
- The files `config_<guid>`, `tokenizer_<guid>`, `model_<guid>`, `report_<guid>`, `completion_<guid>` will be deleted.

#### ℹ️ Info
- Requires the **GUID** of the trained model to find the `config_<guid>`, `tokenizer_<guid>` and `model_<guid>`.
- An output with information will be provided.

After many hours of training on a single document of multiple Shakespeare works using a **laptop CPU**, The Little Baby learns to babble. Its speech is primitive and childlike — just enough to make you smile and realize… the baby is alive. While its capabilities are minimal, its structure is maximal in transparency. Every token, gradient, and parameter is visible and malleable.

*Keep in mind that if you're running a process in VSCode and your workstation, PC, or laptop enters hibernation, the process will resume automatically once the device is powered back on.

## 🍼 Cry. Babble. Speak. Repeat.

Here come the smartest little settings to help the model learn and grow big and strong from this data:

- **Age 3 Months** - 33bd6583-1b87-4469-b55e-0ccb8fd0441c - Coos and gurgles begin. Sound, not speech—yet something's brewing.
- **Age 6 Months** - 180eeb27-b1b4-4427-9734-c70e10da2005 - Loud, random cries. It's not talking, but it's definitely expressive.
- **Age 12 Months** - 5f13a2ab-113a-4c2c-8abd-40384bdd8854 - Joyful noise with hints of intention. Real words still warming up.
- **Age 24 Months** - cb632ce3-3f3b-432b-b24f-9171005f205e - Words arrive—chaotic, quirky, delightful. Syntax? Optional.
- **Age 48 Months** - 12b8b053-6c14-42aa-a957-89b809e6f785 - Mini Philosopher Mode - Stories, opinions, even jokes. Communication unlocked.

*Keep in mind that these are pre-trained model executions available for finetune or inference. You can bypass the training phase by simply downloading the models and using them directly.

## ⚙️ Parameters

These hyperparameters collectively define the training process, where a model's architecture—specified by its depth (n_layers), width (n_emb), attention span (n_ctx), and attention mechanism (n_heads, head_size)—is optimized over a set number of epochs (n_epochs) using a specific batch_size and learning rate (r_learn), with dropout (r_dropout) applied to improve generalization.
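Before the detailed list, here is a hypothetical settings sketch; the keys follow the parameters described below, but the values are illustrative only and are not the notebook's defaults:

```python
# Illustrative values only; the real defaults live in the notebook's settings cells.
settings = {
    "c_device": "cpu",      # cpu or gpu
    "c_tokenizer": "char",  # tokenization strategy
    "c_attention": "mha",   # attention variant
    "c_network": "mlp",     # network variant
    "n_ctx": 128,           # context window (tokens)
    "n_emb": 128,           # embedding width
    "head_size": 128,       # total attention dimensionality (divisible by n_heads)
    "n_heads": 4,           # parallel attention heads
    "n_layers": 4,          # stacked Transformer blocks
    "n_epochs": 1,          # passes over the training dataset
    "batch_size": 4,        # sequences per forward/backward pass
    "r_dropout": 0.1,       # dropout rate
    "r_learn": 0.001,       # learning rate
    "r_split": 0.9,         # train/validation split ratio
    "c_shuffle": False,     # shuffle data before splitting
}
```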
- **c_device**
  - Values: cpu, gpu
  - What it is: Specifies the hardware device used for executing model operations—either the central processing unit (cpu) or the graphics processing unit (gpu).
  - Size: It doesn't directly affect parameter count, but it can influence model deployment size due to differences in memory handling and batch processing capabilities.
  - Speed: It significantly impacts model speed—gpu enables faster parallel computation, whereas cpu is better suited for lightweight or sequential tasks.
  - Quality: Device choice doesn't alter model accuracy, but slower execution on cpu may affect responsiveness in real-time applications, while gpu allows for more efficient training and inference cycles.
- **c_device_cpu_cores**
  - Values: [1, *]
  - What it is: Specifies the number of CPU cores available for executing model operations.
  - Size: Doesn't directly affect model parameter count, but may influence memory allocation and parallel processing capacity.
  - Speed: More cores can improve throughput for preprocessing and lightweight inference tasks, though still slower than GPU for deep learning workloads.
  - Quality: No direct impact on model accuracy, but limited cores may reduce responsiveness in real-time or multi-threaded environments.
- **c_device_gpu_core**
  - Values: [0, *]
  - What it is: Identifies the specific GPU core or device used for model execution.
  - Size: Doesn't change model parameters, but selecting a more powerful GPU can enable larger batch sizes and more complex models.
  - Speed: Affects execution speed depending on the GPU's architecture, memory bandwidth, and compute capability.
  - Quality: Indirectly improves training and inference quality by enabling faster iteration and better resource utilization.
- **c_device_gpu_tensor**
  - Values: [0, 1]
  - What it is: Refers to the tensor-level operations executed on the GPU, typically involving matrix multiplications and attention mechanisms.
  - Size: Doesn't alter parameter count, but efficient tensor handling allows for larger models and more scalable training.
  - Speed: Critical for accelerating deep learning workloads; optimized tensor operations dramatically reduce training and inference time.
  - Quality: Enhances model performance by supporting high-throughput computation, especially in large-scale or multi-modal architectures.
- **c_tokenizer**
  - Values: [char]
  - What it is: Strategy for tokenizing sequences.
  - Size: It doesn't directly affect parameter count, but it does influence model size due to differences in vocabulary structure.
  - Speed: It influences model speed due to differences in vocabulary structure.
  - Quality: When texts contain errors, it can negatively affect training and inference quality.
- **c_sequence**
  - Values: [pre, post]
  - What it is: Strategy for constructing block sequences.
  - Size: No direct impact on model size.
  - Speed: No direct impact on performance.
  - Quality: Proper sequence construction affects how well long dependencies are exposed. Future variants could improve learning efficiency on heterogeneous corpora.
- **c_attention**
  - Values: [mha, moh, gqa, swh, aft]
  - What it is: Chosen attention mechanism implementation.
  - Size: Attention choice impacts model size.
  - Speed: Attention choice impacts model speed.
  - Quality: Attention choice influences how diverse relational patterns are captured.
- **c_network**
  - Values: [mlp, moe, lor, swi, nft]
  - What it is: Chosen network mechanism implementation.
  - Size: Network choice impacts model size.
  - Speed: Network choice impacts model speed.
  - Quality: Network choice impacts representational richness and efficiency.
- **n_ctx**
  - Values: [8 : ****]
  - What it is: The maximum number of tokens (characters, in this case) the model can look at in a single sequence to make a prediction. It's the model's "attention span".
  - Size: Directly increases the size of the positional embedding table (n_ctx x n_emb), adding more parameters to the model.
  - Speed: Has a major impact. The self-attention mechanism's computation grows quadratically with the context length (O(n_ctx²)). Doubling n_ctx will roughly quadruple the time and memory needed for the attention layers, making it one of the most expensive parameters to increase.
  - Quality: A larger n_ctx allows the model to learn longer-range dependencies in the text, which can significantly improve quality for tasks that require understanding context over long passages.
- **n_emb**
  - Values: [8 : ****]
  - What it is: The size of the vector used to represent each token. It defines the "width" of the model.
  - Size: Has a major impact on model size. It increases the size of token and positional embeddings, and scales the weight matrices in the attention and MLP layers, significantly increasing the total parameter count.
  - Speed: Increasing n_emb increases the size of nearly all weight matrices in the model. This leads to more parameters, which increases both memory usage and the time required for matrix multiplications. The impact is significant but generally more linear than n_ctx.
  - Quality: A larger n_emb gives the model more capacity to learn rich, complex representations of tokens and their relationships. This can lead to a more powerful and accurate model, but also increases the risk of overfitting if the model is too large for the dataset.
- **head_size**
  - Values: [8 : ****]
  - What it is: The total dimensionality of the concatenated attention heads. This dimension is projected from the input embedding (n_emb) to create the Query, Key, and Value matrices.
  - Size: Directly increases the number of parameters in each attention block by defining the size of the Q, K, V, and output projection matrices.
  - Speed: Directly affects the size of the Q, K, and V projection matrices. A larger head_size increases the number of computations and memory usage within each attention block.
  - Quality: A larger head_size gives the model more representational power within the attention mechanism. It must be divisible by n_heads.
- **n_heads**
  - Values: [1 : ****]
  - What it is: The attention mechanism is split into multiple "heads" that perform attention calculations in parallel. Each head can learn to focus on different types of relationships in the data.
  - Size: Has no direct impact on model size, as it only determines how the head_size dimension is partitioned for parallel computation.
  - Speed: The computations for each head can be parallelized. On capable hardware, increasing the number of heads might not slow down training significantly if the head_size is kept constant.
  - Quality: Allows the model to simultaneously attend to information from different representation subspaces at different positions. This is a core concept of the Transformer and generally leads to a much better model than a single attention head.
- **n_layers**
  - Values: [1 : ****]
  - What it is: The number of Transformer blocks stacked on top of each other. This defines the "depth" of the model.
  - Size: Has a direct, linear impact on model size. Each layer adds a block with attention layers and network layers.
  - Speed: The impact is linear. Doubling n_layers will roughly double the training time and the number of model parameters, as the input data must pass through each block sequentially.
  - Quality: More layers allow the model to learn more complex and abstract features. Deeper models are generally more powerful, but also more prone to overfitting and can be harder to train (though residual connections help mitigate this).
- **n_epochs**
  - Values: [1 : ****]
  - What it is: The number of times the training process will iterate over the entire training dataset.
  - Size: Has no impact on the size of the model.
  - Speed: Directly and linearly impacts total training time. More epochs mean longer training.
  - Quality: Too few epochs will lead to an undertrained model (underfitting). Too many can lead to the model memorizing the training data (overfitting), which hurts its performance on new data. The ideal number is usually found by monitoring the validation loss.
- **batch_size**
  - Values: [1 : ****]
  - What it is: The number of training sequences (each of length n_ctx) processed in one forward/backward pass.
  - Size: Has no impact on the size of the model.
  - Speed: A larger batch_size allows for more parallelization, generally leading to faster training (fewer updates per epoch). However, it also requires more memory.
  - Quality: This is a trade-off. Larger batches provide a more accurate and stable gradient estimate, but the noise from smaller batches can act as a regularizer, helping the model find a better minimum and generalize better.
- **r_dropout**
  - Values: [0.1 : 0.001]
  - What it is: A regularization technique where a fraction of neuron activations are randomly set to zero during each training step. This prevents the model from becoming too reliant on any single neuron.
  - Size: Has no impact on the size of the model.
  - Speed: Has a negligible impact on training speed and no impact on inference speed (it's disabled during evaluation).
  - Quality: Crucial for improving model generalization and preventing overfitting. By forcing the network to learn redundant representations, it makes the model more robust. The value (e.g., 0.1) is the probability of a neuron being dropped.
- **r_learn**
  - Values: [0.1 : 0.0001]
  - What it is: Controls how much the model's weights are adjusted with respect to the loss gradient. It determines the step size at each iteration.
  - Size: Has no impact on the size of the model.
  - Speed: Affects the speed of convergence. A higher learning rate might converge faster, but risks overshooting the optimal weights. A lower learning rate is more stable but can be very slow to converge.
  - Quality: This is one of the most critical parameters. If it's too high, the training can become unstable and diverge. If it's too low, the model may get stuck in a suboptimal solution or take too long to train. The AdamW optimizer helps adapt the learning rate, but the initial value is still very important.
- **s_warmup**
  - Values: [none, auto, 1 : 0.0001]
  - What it is: Controls how many steps train with a proportionally scaled learning rate before the full learning rate is reached.
  - Size: Has no impact on the number of parameters in the model.
  - Speed: Affects the speed of convergence, since early steps use a proportionally reduced learning rate.
  - Quality: This is one of the most critical parameters. If it's too high, the optimizer will take a large number of steps before reaching the full learning rate; if it's too low, it will reach the full learning rate after only a few steps.
- **c_shuffle**
  - Values: [false, true]
  - What it is: Controls the shuffling of data during the tokenizer process for training and validation.
  - Size: Has no impact on the size of the model.
  - Speed: Has no impact on the speed of the model.
  - Quality: This is one of the most critical parameters. If it's set to false, the validation loss can be very high due to a biased training/validation split (the last part of the data might be systematically different), but reproducibility and comparison among models are easy. If it's set to true, the validation loss will be more accurate due to an unbiased split, but reproducibility and comparison among models are difficult because batch contents differ; a fixed random seed can mitigate this.
- **r_split**
  - Values: [0.1 : 0.9]
  - What it is: Controls the splitting of data during the tokenizer process for training and validation.
  - Size: Has no impact on the size of the model.
  - Speed: As this ratio increases, training becomes slower and validation faster, since more data goes to training and less to validation.
  - Quality: This is one of the most critical parameters. If it's set too low, training will not see enough of the input content to learn from it. If it's set too high, training will see most of the input content, but validation will not have enough batches.

## 📐 Formulas

Even our little language models have their favorite rules to follow—turns out, they quietly cuddle up to some clever mathematical formulas that help them make sense of the world.

- **Learning Rate**
  ```LR_new = LR_old * (B_new / B_old)```
  The new learning rate (LR_new) is based on the old learning rate (LR_old), the new batch size (B_new), and the old batch size (B_old).
- **Total Parameters**
  ```
  P = V × H                 # token embeddings
      + L × [
          3 × H × H         # Q, K, V projections
          + H × H           # output projection from attention
          + 4 × H × F       # feedforward up-projection
          + 4 × F × H       # feedforward down-projection
          + biases (small)
      ]
  ```
  Total parameters are based on the vocabulary size (V), head size / embedding size (H), number of layers (L), and feedforward intermediate size (F).
- **Token Throughput for training**
  ```T = 20-40 per P```
  The number of tokens (T) processed per parameter (P) is 20-40.
- **FLOPs Throughput for training**
  ```F = 6 * T * P```
  FLOPs are based on 6 (2 ops for the forward pass and 4 ops for the backward pass), the number of tokens (T), and the number of parameters (P).
- **Memory for training**
  ```
  4GB  = batch_size=4,  n_ctx=128, n_emb=128, n_layers=4
  8GB  = batch_size=4,  n_ctx=256, n_emb=128, n_layers=4
  16GB = batch_size=4,  n_ctx=512, n_emb=128, n_layers=4
  8GB  = batch_size=8,  n_ctx=128, n_emb=128, n_layers=4
  16GB = batch_size=16, n_ctx=128, n_emb=128, n_layers=4
  ```
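To make these rules concrete, here is a small sketch that simply transcribes the formulas above; the configuration values are illustrative:

```python
# Illustrative numbers only; plug in your own configuration.
V, H, L, F = 100, 128, 4, 128   # vocabulary, embedding/head size, layers, feedforward size

# Total parameters, following the README's formula (biases omitted as "small")
P = V * H + L * (3 * H * H + H * H + 4 * H * F + 4 * F * H)
print(f"~{P:,} parameters")

# Tokens worth training on, per the 20-40 tokens-per-parameter rule of thumb
print(f"~{20 * P:,} to ~{40 * P:,} training tokens")

# Training FLOPs estimate: 6 ops per token per parameter
T = 30 * P
print(f"~{6 * T * P:.2e} FLOPs")

# Learning-rate scaling when the batch size changes
lr_old, b_old, b_new = 1e-3, 4, 16
lr_new = lr_old * (b_new / b_old)
print(f"scaled learning rate: {lr_new}")
```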
## 🏛️ Architecture

A language model architecture is a combination of attention and a neural network design—often based on transformers—that processes and generates human-like text by learning patterns from large-scale language data.

![Architecture Diagram](material/LittleBaby.drawio.svg)

### 👁️ Attention Variants Complexity Table

The attention mechanism helps a language model decide which words (or tokens) in a sentence are most relevant when generating or interpreting another word. It's like giving the model a spotlight to focus on the most important parts of the input.

![Architecture Diagram](material/LittleBaby_attention.drawio.svg)

| Variant | Uses Q/K/V? | Complexity | Notes | Details |
|--------|--------------|------------|------------|------------|
| **MHA** (Multi-Head Attention) | Separate Q, K, V per head | **O(B·T²·H·d_k)** | Standard Transformer attention; expensive for long sequences | Standard full multi‑head attention. |
| **MOH** (Multi-Output Head) | Typically uses Q/K/V | **O(B·T²·H·d_k)** | Less common; focuses on output diversity rather than input projection | Full QKᵀ for all heads + softmax gating over heads. |
| **GQA** (Grouped-Query Attention) | Shared K/V per group of Q heads | **O(B·T²·Hkv·d_k) with Hkv < Hq** | Trade-off between performance and efficiency | Full QKᵀ but with fewer K/V heads (shared across Q groups). |
| **SWH** (Sliding Window Attention) | Uses Q/K/V within local window | **O(B·T²·H·d_k)** | Limits attention to nearby tokens; efficient for long sequences | Full QKᵀ per head, masked so each token attends only to a local window of nearby tokens. |
| **AFT** (Attention-Free Transformer) | No K/V; uses learned positional bias | **O(B·T·D)** | Removes attention entirely; uses element-wise operations and bias terms | Only k_proj, v_proj, elementwise exp/clip, cumsum, division, c_proj. No QKᵀ. |
| **LDA** (Linear Diagonal Attention) | Shared Q/K; diagonal-only interaction | **O(B·T·D)** | Lightweight attention using only diagonal of QKᵀ; fast and memory-efficient | Computes only Qᵢ·Kᵢ for each token *i* (no pairwise attention); often gated with sigmoid or swish. |

### 🕸️ Network Variants Complexity Table

A neural network is a system of interconnected nodes (called neurons) inspired by the human brain. In language models, these networks process text data by passing it through multiple layers, each transforming the input in increasingly abstract ways.

![Architecture Diagram](material/LittleBaby_network.drawio.svg)

| Variant | Complexity | Notes | Details |
|--------|------------|------------|------------|
| **MLP** (Multilayer Perceptron) | **O(N × D²)** | Dense feedforward layer; all inputs pass through the same network | 1 large expansion projection + 1 down projection + GELU + dropout. |
| **MOE** (Mixture of Experts) | **O(K × D²)** (K ≪ N) | Sparse routing to K of N experts; improves parameter-to-compute ratio and scalability | Gating projection + all experts computed every time (dense MOE) → many large projections per forward. |
| **LOR** (Low-Rank Adaptation) | **O(N × rD)** where *r* ≪ *D* | Efficient fine-tuning by injecting low-rank matrices into frozen weights | 1 frozen full projection + 2 small low-rank projections (rank ≪ D) + dropout. |
| **SWI** (Shifted Window Interaction) | **O(N·w)** where *w* is window size | Local windowed processing with shifted regions; avoids global attention | 2 full projections up to expanded dim + 1 down projection, with swish gating. |
| **NFT** (Network Free Transformer) | **O(N × D)** | Attention-free mechanism that converts features into discrete tokens; useful for structured or multimodal data | 3–4 linear projections (q_proj optional) + elementwise ops + cumsum (O(B·T·D)), no QKᵀ, no expansion. |
| **LIN** (Linear Instant Network) | **O(N × D)** | Lightweight feedforward alternative; fast and interpretable | 1 linear projection + 1 gating projection (sigmoid or swish) + elementwise product; no expansion, no dropout. |
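To give a flavor of how compact the lightweight variants are, here is a minimal NumPy sketch of a LIN-style block as the table describes it (one linear projection, one sigmoid gating projection, elementwise product); the function and weight names are illustrative, not the notebook's actual variables:

```python
import numpy as np

def lin_block(x, w_value, w_gate):
    """LIN-style feedforward: a value projection gated by a sigmoid projection.

    x:       (T, D) token activations
    w_value: (D, D) linear projection weights
    w_gate:  (D, D) gating projection weights
    """
    value = x @ w_value                          # 1 linear projection
    gate = 1.0 / (1.0 + np.exp(-(x @ w_gate)))   # 1 sigmoid gating projection
    return value * gate                          # elementwise product, no expansion

# Tiny demo with random weights
T, D = 8, 16
rng = np.random.default_rng(0)
x = rng.normal(size=(T, D))
out = lin_block(x, rng.normal(size=(D, D)) * 0.1, rng.normal(size=(D, D)) * 0.1)
print(out.shape)  # (8, 16)
```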
## 🗄️ Data Sets (TEXT)

These are the special learning blocks that help the little baby grow smart and curious!

[View used datasets](statistics/statistic_datasets.csv)

*Keep in mind that these datasets are relatively small in size and lightweight in terms of computational requirements, meaning they can be easily processed and executed on virtually any personal computer without the need for specialized hardware or high-performance systems.

## 🔍 Report Analysis (CPU / GPU)

These are the little notes that show how the baby is learning and growing every day!

[View performed experiments](statistics/statistic_experiments.csv)

*Keep in mind that quality should never be assumed without scrutiny: its evaluation by a larger language model depends on specific criteria, and these models may not consistently produce the same assessment across different runs or contexts.

## 🕵️ Observations

While playing and exploring with our tiny language models, we noticed a few adorable quirks and clever behaviors—here are some of the sweet observations we made along the way.

- When training with **c_tokenizer** set to word instead of char, the vocabulary can grow from about 100 to about 1000 entries, depending on how many distinct words the document contains, and processing takes longer.
- When training, if **n_ctx** is increased, the model size increases slightly, since n_ctx shapes the positional embeddings, and total time also increases.
- When training, if **n_emb** is increased, the model size increases slightly, since n_emb shapes the token embeddings, positional embeddings, normalization, and head; total time also increases.
- When training, if **head_size** is increased, the model size increases, since head_size shapes the attention blocks; total time also increases.
- When training, if **n_layers** is increased, the model size and total time both increase; depending on the attention and network selection, they follow different formulas.
- When training, if **vocab_size** is increased, the tokenizer size and total time both increase; this scales linearly, as each lookup array has the length of the vocabulary.
- When finetuning, if **vocab_size** is increased, the wpe dimension and lm_head dimension grow, so the model parameter count increases slightly.
- When running inference, if **infr_cache** is true, generation is much faster (avoiding the O(T²) recomputation), because previous sequences do not need to be recalculated for each new token.
- When running inference with x **max_tokens** for generation, the given prompt should be smaller than n_ctx in order to pass through the parameter matrices, but max_tokens can exceed n_ctx and generate as much content as needed. No error occurs, because each token prediction uses the previous n_ctx tokens to produce the next token, and this can happen an unlimited number of times (see the sketch below). Still, it's good practice not to generate more tokens than n_ctx, since earlier context is lost and generalization suffers.
- When running inference with x **max_tokens** for generation:
  - if the output type is plain text, it will have x tokens.
  - if the output type is json, it will have y tokens where y >= x, because it might contain special characters; for example, new lines in json are represented as two characters: "\n" → "\", "n".
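A minimal sketch of the sliding-window behavior described above; `predict_next` is a stand-in for the real forward pass:

```python
import numpy as np

n_ctx = 128  # the model's attention span

def predict_next(window):
    # Placeholder for the real forward pass over at most n_ctx tokens.
    return int(np.random.randint(0, 100))

def generate(tokens, max_tokens):
    # max_tokens may exceed n_ctx: each step only looks at the last n_ctx tokens.
    for _ in range(max_tokens):
        window = tokens[-n_ctx:]           # crop the context to the attention span
        tokens.append(predict_next(window))
    return tokens

print(len(generate(list(range(10)), max_tokens=300)))  # 310: 10 prompt + 300 generated
```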
giovannidemuri/mine-qwen2.5-0.5b-instruct
giovannidemuri
2025-09-18T11:39:55Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "chat", "conversational", "en", "arxiv:2407.10671", "base_model:Qwen/Qwen2.5-0.5B", "base_model:finetune:Qwen/Qwen2.5-0.5B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-18T11:39:36Z
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-0.5B
tags:
- chat
library_name: transformers
---

# Qwen2.5-0.5B-Instruct

## Introduction

Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:

- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** of up to 128K tokens, with generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.

**This repo contains the instruction-tuned 0.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 0.49B
- Number of Parameters (Non-Embedding): 0.36B
- Number of Layers: 24
- Number of Attention Heads (GQA): 14 for Q and 2 for KV
- Context Length: Full 32,768 tokens and generation 8,192 tokens

For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).

## Requirements

The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`. With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
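A quick way to verify the installed version before loading the model (a minimal sketch; `packaging` ships with most Python environments):

```python
import transformers
from packaging import version

# Qwen2 support landed in transformers 4.37.0; older versions raise KeyError: 'qwen2'.
if version.parse(transformers.__version__) < version.parse("4.37.0"):
    raise RuntimeError(f"transformers {transformers.__version__} is too old; "
                       "upgrade with: pip install -U transformers")
```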
## Quickstart

Below is a code snippet showing how to use `apply_chat_template` to load the tokenizer and model and generate content.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```

## Evaluation & Performance

Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).

For requirements on GPU memory and the respective throughput, see the results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).

## Citation

If you find our work helpful, feel free to cite it.

```
@misc{qwen2.5,
    title = {Qwen2.5: A Party of Foundation Models},
    url = {https://qwenlm.github.io/blog/qwen2.5/},
    author = {Qwen Team},
    month = {September},
    year = {2024}
}

@article{qwen2,
    title={Qwen2 Technical Report},
    author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
    journal={arXiv preprint arXiv:2407.10671},
    year={2024}
}
```
noobmaster6009/Qwen3-0.6B-Gensyn-Swarm-furry_furry_gerbil
noobmaster6009
2025-09-18T11:39:23Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am furry_furry_gerbil", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-18T05:24:27Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am furry_furry_gerbil --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
giovannidemuri/mine-qwen2.5-1.5b-instruct
giovannidemuri
2025-09-18T11:39:20Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "chat", "conversational", "en", "arxiv:2407.10671", "base_model:Qwen/Qwen2.5-1.5B", "base_model:finetune:Qwen/Qwen2.5-1.5B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-18T11:38:16Z
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-1.5B
tags:
- chat
library_name: transformers
---

# Qwen2.5-1.5B-Instruct

## Introduction

Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:

- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** of up to 128K tokens, with generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.

**This repo contains the instruction-tuned 1.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 1.54B
- Number of Parameters (Non-Embedding): 1.31B
- Number of Layers: 28
- Number of Attention Heads (GQA): 12 for Q and 2 for KV
- Context Length: Full 32,768 tokens and generation 8,192 tokens

For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).

## Requirements

The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`. With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
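A quick way to verify the installed version before loading the model (a minimal sketch; `packaging` ships with most Python environments):

```python
import transformers
from packaging import version

# Qwen2 support landed in transformers 4.37.0; older versions raise KeyError: 'qwen2'.
if version.parse(transformers.__version__) < version.parse("4.37.0"):
    raise RuntimeError(f"transformers {transformers.__version__} is too old; "
                       "upgrade with: pip install -U transformers")
```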
## Quickstart

Below is a code snippet showing how to use `apply_chat_template` to load the tokenizer and model and generate content.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-1.5B-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```

## Evaluation & Performance

Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).

For requirements on GPU memory and the respective throughput, see the results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).

## Citation

If you find our work helpful, feel free to cite it.

```
@misc{qwen2.5,
    title = {Qwen2.5: A Party of Foundation Models},
    url = {https://qwenlm.github.io/blog/qwen2.5/},
    author = {Qwen Team},
    month = {September},
    year = {2024}
}

@article{qwen2,
    title={Qwen2 Technical Report},
    author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
    journal={arXiv preprint arXiv:2407.10671},
    year={2024}
}
```
david4096/dpo-all-MiniLM-L6-v2_concat_e100
david4096
2025-09-18T11:39:01Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "ontology", "on2vec", "graph-neural-networks", "base-all-MiniLM-L6-v2", "general", "general-ontology", "fusion-concat", "gnn-gcn", "medium-ontology", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-18T11:38:57Z
--- base_model: all-MiniLM-L6-v2 library_name: sentence-transformers license: apache-2.0 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - ontology - on2vec - graph-neural-networks - base-all-MiniLM-L6-v2 - general - general-ontology - fusion-concat - gnn-gcn - medium-ontology --- # dpo_all-MiniLM-L6-v2_concat_e100 This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks. ## Model Details - **Base Text Model**: all-MiniLM-L6-v2 - Text Embedding Dimension: 384 - **Ontology**: dpo.owl - **Domain**: general - **Ontology Concepts**: 1,381 - **Concept Alignment**: 1,381/1,381 (100.0%) - **Fusion Method**: concat - **GNN Architecture**: GCN - **Structural Embedding Dimension**: 1381 - **Output Embedding Dimension**: 64 - **Hidden Dimensions**: 128 - **Dropout**: 0.0 - **Training Date**: 2025-09-18 - **on2vec Version**: 0.1.0 - **Source Ontology Size**: 3.5 MB - **Model Size**: 100.6 MB - **Library**: on2vec + sentence-transformers ## Technical Architecture This model uses a multi-stage architecture: 1. **Text Encoding**: Input text is encoded using the base sentence-transformer model 2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships 3. **Fusion Layer**: Simple concatenation of text and ontological embeddings **Embedding Flow:** - Text: 384 dimensions → 128 hidden → 64 output - Structure: 1381 concepts → GNN → 64 output - Fusion: concat → Final embedding ## How It Works This model combines: 1. **Text Embeddings**: Generated using the base sentence-transformer model 2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure 3. **Fusion Layer**: Combines both embedding types using the specified fusion method The ontological knowledge helps the model better understand domain-specific relationships and concepts. ## Usage ```python from sentence_transformers import SentenceTransformer # Load the model model = SentenceTransformer('dpo_all-MiniLM-L6-v2_concat_e100') # Generate embeddings sentences = ['Example sentence 1', 'Example sentence 2'] embeddings = model.encode(sentences) # Compute similarity from sentence_transformers.util import cos_sim similarity = cos_sim(embeddings[0], embeddings[1]) ``` ## Fusion Method: concat Simple concatenation of text and ontology embeddings ## Training Process This model was created using the on2vec pipeline: 1. **Ontology Processing**: The OWL ontology was converted to a graph structure 2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships 3. **Text Integration**: Base model text embeddings were combined with ontological embeddings 4. 
**Fusion Training**: The fusion layer was trained to optimally combine both embedding types ## Intended Use This model is particularly effective for: - General domain text processing - Tasks requiring understanding of domain-specific relationships - Semantic similarity in specialized domains - Classification tasks with domain knowledge requirements ## Limitations - Performance may vary on domains different from the training ontology - Ontological knowledge is limited to concepts present in the source OWL file - May have higher computational requirements than vanilla text models ## Citation If you use this model, please cite the on2vec framework: ```bibtex @software{on2vec, title={on2vec: Ontology Embeddings with Graph Neural Networks}, author={David Steinberg}, url={https://github.com/david4096/on2vec}, year={2024} } ``` --- Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
david4096/dideo-all-MiniLM-L6-v2_concat_e100
david4096
2025-09-18T11:38:40Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "ontology", "on2vec", "graph-neural-networks", "base-all-MiniLM-L6-v2", "general", "general-ontology", "fusion-concat", "gnn-gcn", "small-ontology", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-18T11:38:38Z
--- base_model: all-MiniLM-L6-v2 library_name: sentence-transformers license: apache-2.0 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - ontology - on2vec - graph-neural-networks - base-all-MiniLM-L6-v2 - general - general-ontology - fusion-concat - gnn-gcn - small-ontology --- # dideo_all-MiniLM-L6-v2_concat_e100 This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks. ## Model Details - **Base Text Model**: all-MiniLM-L6-v2 - Text Embedding Dimension: 384 - **Ontology**: dideo.owl - **Domain**: general - **Ontology Concepts**: 416 - **Concept Alignment**: 416/416 (100.0%) - **Fusion Method**: concat - **GNN Architecture**: GCN - **Structural Embedding Dimension**: 416 - **Output Embedding Dimension**: 64 - **Hidden Dimensions**: 128 - **Dropout**: 0.0 - **Training Date**: 2025-09-18 - **on2vec Version**: 0.1.0 - **Source Ontology Size**: 0.9 MB - **Model Size**: 91.5 MB - **Library**: on2vec + sentence-transformers ## Technical Architecture This model uses a multi-stage architecture: 1. **Text Encoding**: Input text is encoded using the base sentence-transformer model 2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships 3. **Fusion Layer**: Simple concatenation of text and ontological embeddings **Embedding Flow:** - Text: 384 dimensions → 128 hidden → 64 output - Structure: 416 concepts → GNN → 64 output - Fusion: concat → Final embedding ## How It Works This model combines: 1. **Text Embeddings**: Generated using the base sentence-transformer model 2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure 3. **Fusion Layer**: Combines both embedding types using the specified fusion method The ontological knowledge helps the model better understand domain-specific relationships and concepts. ## Usage ```python from sentence_transformers import SentenceTransformer # Load the model model = SentenceTransformer('dideo_all-MiniLM-L6-v2_concat_e100') # Generate embeddings sentences = ['Example sentence 1', 'Example sentence 2'] embeddings = model.encode(sentences) # Compute similarity from sentence_transformers.util import cos_sim similarity = cos_sim(embeddings[0], embeddings[1]) ``` ## Fusion Method: concat Simple concatenation of text and ontology embeddings ## Training Process This model was created using the on2vec pipeline: 1. **Ontology Processing**: The OWL ontology was converted to a graph structure 2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships 3. **Text Integration**: Base model text embeddings were combined with ontological embeddings 4. 
**Fusion Training**: The fusion layer was trained to optimally combine both embedding types ## Intended Use This model is particularly effective for: - General domain text processing - Tasks requiring understanding of domain-specific relationships - Semantic similarity in specialized domains - Classification tasks with domain knowledge requirements ## Limitations - Performance may vary on domains different from the training ontology - Ontological knowledge is limited to concepts present in the source OWL file - May have higher computational requirements than vanilla text models ## Citation If you use this model, please cite the on2vec framework: ```bibtex @software{on2vec, title={on2vec: Ontology Embeddings with Graph Neural Networks}, author={David Steinberg}, url={https://github.com/david4096/on2vec}, year={2024} } ``` --- Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
HaniBO/CBC_base
HaniBO
2025-09-18T11:38:39Z
0
0
transformers
[ "transformers", "safetensors", "gguf", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "base_model:quantized:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-18T11:34:13Z
--- base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mistral - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** HaniBO - **License:** apache-2.0 - **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
munawwarsultan2017/MentalRisk_DeBERTa
munawwarsultan2017
2025-09-18T11:38:25Z
0
0
null
[ "safetensors", "deberta-v2", "mental-health", "classification", "transformer", "DeBERTa", "text-classification", "en", "base_model:microsoft/deberta-v3-base", "base_model:finetune:microsoft/deberta-v3-base", "license:apache-2.0", "region:us" ]
text-classification
2025-07-07T05:14:14Z
--- language: - en license: apache-2.0 tags: - mental-health - classification - transformer - DeBERTa pipeline_tag: text-classification base_model: microsoft/deberta-v3-base metrics: - accuracy - f1 --- # MentalRisk_DeBERTa **Mental health risk classifier using DeBERTa-v3-base** A fine-tuned DeBERTa-v3-base model for detecting mental health risk in English text. This binary classifier predicts: - `0` = **no risk** - `1` = **risk** --- ## 🧠 Model Description - **Architecture:** microsoft/deberta-v3-base - **Task:** Binary text classification - **Classification Labels:** - **no risk** (0) - **risk** (1) - **Fine-tuned on:** Social media posts annotated for mental health risk --- ## 📦 Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model_name = "munawwarsultan2017/MentalRisk_DeBERTa" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSequenceClassification.from_pretrained(model_name) text = "I've been feeling hopeless and it's getting harder to get out of bed." inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=128) with torch.no_grad(): logits = model(**inputs).logits probs = torch.softmax(logits, dim=1)[0] label = model.config.id2label[probs.argmax().item()] print(f"Label: {label}, Risk prob: {probs[1]:.2f}") ``` --- ## 🔍 Explainability ```python from transformers_interpret import SequenceClassificationExplainer explainer = SequenceClassificationExplainer(model, tokenizer) word_atts = explainer(text) print(word_atts) ``` --- ## ⚠️ Intended Use & Limitations **Use Cases:** * Aid content moderation on social platforms * Flagging potential mental health concerns for review * Research in mental health and NLP **Not for clinical diagnostics.** Predictions should be reviewed by qualified professionals. Performance may vary on different data sources (e.g., outside of Reddit). --- ## 📊 Evaluation Metrics | Metric | Value | |--------------------|---------| | Accuracy | 0.8276 | | AUC | 0.914 | | F1 Score | 0.8499 | | Precision (risk) | 0.7967 | | Recall (risk) | 0.9108 | | Average Precision | 0.9241 | ### Classification Report | Class | Precision | Recall | F1-score | Support | |----------|-----------|--------|----------|---------| | no risk | 0.8767 | 0.7316 | 0.7976 | 272 | | risk | 0.7967 | 0.9108 | 0.8499 | 314 | - **accuracy**: 0.8276 (586 samples) - **macro avg**: Precision 0.8367, Recall 0.8212, F1 0.8238 - **weighted avg**: Precision 0.8338, Recall 0.8276, F1 0.8256 --- ## 🏗️ Training Details * **Base model:** DeBERTa-v3-base * **Epochs:** 8 (early stopping) * **Batch size:** 4 * **Learning rate:** 2e-5 * **Early stopping:** after 3 bad epochs * **Loss function:** Cross-entropy * **Optimizer:** AdamW, scheduler with warmup --- ## 📌 Citation please cite as "https://huggingface.co/munawwarsultan2017/MentalRisk_DeBERTa" --- ## 🤝 Contact & Support If you encounter issues or have questions, feel free to open an issue on the model page or contact the author. --- *This model card follows best practices for transparency and clarity. Adapt as needed!*
david4096/ddpheno-all-MiniLM-L6-v2_concat_e100
david4096
2025-09-18T11:38:24Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "ontology", "on2vec", "graph-neural-networks", "base-all-MiniLM-L6-v2", "general", "general-ontology", "fusion-concat", "gnn-gcn", "medium-ontology", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-18T11:38:20Z
--- base_model: all-MiniLM-L6-v2 library_name: sentence-transformers license: apache-2.0 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - ontology - on2vec - graph-neural-networks - base-all-MiniLM-L6-v2 - general - general-ontology - fusion-concat - gnn-gcn - medium-ontology --- # ddpheno_all-MiniLM-L6-v2_concat_e100 This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks. ## Model Details - **Base Text Model**: all-MiniLM-L6-v2 - Text Embedding Dimension: 384 - **Ontology**: ddpheno.owl - **Domain**: general - **Ontology Concepts**: 1,373 - **Concept Alignment**: 1,373/1,373 (100.0%) - **Fusion Method**: concat - **GNN Architecture**: GCN - **Structural Embedding Dimension**: 1373 - **Output Embedding Dimension**: 64 - **Hidden Dimensions**: 128 - **Dropout**: 0.0 - **Training Date**: 2025-09-18 - **on2vec Version**: 0.1.0 - **Source Ontology Size**: 1.4 MB - **Model Size**: 100.5 MB - **Library**: on2vec + sentence-transformers ## Technical Architecture This model uses a multi-stage architecture: 1. **Text Encoding**: Input text is encoded using the base sentence-transformer model 2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships 3. **Fusion Layer**: Simple concatenation of text and ontological embeddings **Embedding Flow:** - Text: 384 dimensions → 128 hidden → 64 output - Structure: 1373 concepts → GNN → 64 output - Fusion: concat → Final embedding ## How It Works This model combines: 1. **Text Embeddings**: Generated using the base sentence-transformer model 2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure 3. **Fusion Layer**: Combines both embedding types using the specified fusion method The ontological knowledge helps the model better understand domain-specific relationships and concepts. ## Usage ```python from sentence_transformers import SentenceTransformer # Load the model model = SentenceTransformer('ddpheno_all-MiniLM-L6-v2_concat_e100') # Generate embeddings sentences = ['Example sentence 1', 'Example sentence 2'] embeddings = model.encode(sentences) # Compute similarity from sentence_transformers.util import cos_sim similarity = cos_sim(embeddings[0], embeddings[1]) ``` ## Fusion Method: concat Simple concatenation of text and ontology embeddings ## Training Process This model was created using the on2vec pipeline: 1. **Ontology Processing**: The OWL ontology was converted to a graph structure 2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships 3. **Text Integration**: Base model text embeddings were combined with ontological embeddings 4. 
**Fusion Training**: The fusion layer was trained to optimally combine both embedding types ## Intended Use This model is particularly effective for: - General domain text processing - Tasks requiring understanding of domain-specific relationships - Semantic similarity in specialized domains - Classification tasks with domain knowledge requirements ## Limitations - Performance may vary on domains different from the training ontology - Ontological knowledge is limited to concepts present in the source OWL file - May have higher computational requirements than vanilla text models ## Citation If you use this model, please cite the on2vec framework: ```bibtex @software{on2vec, title={on2vec: Ontology Embeddings with Graph Neural Networks}, author={David Steinberg}, url={https://github.com/david4096/on2vec}, year={2024} } ``` --- Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
david4096/cido-all-MiniLM-L6-v2_concat_e100
david4096
2025-09-18T11:38:02Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "ontology", "on2vec", "graph-neural-networks", "base-all-MiniLM-L6-v2", "general", "general-ontology", "fusion-concat", "gnn-gcn", "large-ontology", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-18T11:37:35Z
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-concat
- gnn-gcn
- large-ontology
---

# cido_all-MiniLM-L6-v2_concat_e100

This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.

## Model Details

- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: cido.owl
- **Domain**: general
- **Ontology Concepts**: 31,924
- **Concept Alignment**: 31,924/31,924 (100.0%)
- **Fusion Method**: concat
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 31924
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 44.8 MB
- **Model Size**: 387.8 MB
- **Library**: on2vec + sentence-transformers

## Technical Architecture

This model uses a multi-stage architecture:

1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Simple concatenation of text and ontological embeddings

**Embedding Flow:**

- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 31924 concepts → GNN → 64 output
- Fusion: concat → Final embedding

## How It Works

This model combines:

1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method

The ontological knowledge helps the model better understand domain-specific relationships and concepts.

## Usage

```python
from sentence_transformers import SentenceTransformer

# Load the model
model = SentenceTransformer('cido_all-MiniLM-L6-v2_concat_e100')

# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)

# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```

## Fusion Method: concat

Simple concatenation of text and ontology embeddings

## Training Process

This model was created using the on2vec pipeline:

1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types

## Intended Use

This model is particularly effective for:

- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements

## Limitations

- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models

## Citation

If you use this model, please cite the on2vec framework:

```bibtex
@software{on2vec,
  title={on2vec: Ontology Embeddings with Graph Neural Networks},
  author={David Steinberg},
  url={https://github.com/david4096/on2vec},
  year={2024}
}
```

---

Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
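To make the concat fusion described in this card concrete, here is a minimal NumPy sketch of the operation. The vectors are random stand-ins; only the dimensions (384-d text, 64-d structural) come from the card:

```python
import numpy as np

rng = np.random.default_rng(0)
text_embedding = rng.normal(size=384)      # base all-MiniLM-L6-v2 sentence embedding
ontology_embedding = rng.normal(size=64)   # GNN structural embedding for an aligned concept

# concat fusion: the two vectors are simply joined end to end
fused = np.concatenate([text_embedding, ontology_embedding])
print(fused.shape)  # (448,)
```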
ataur09/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whiskered_swift_capybara
ataur09
2025-09-18T11:37:56Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am whiskered_swift_capybara", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-18T11:37:01Z
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am whiskered_swift_capybara
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
tyanfarm/llama3-8b-hotelfaq-finetuned
tyanfarm
2025-09-18T11:37:01Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-18T11:36:42Z
---
library_name: transformers
tags:
- unsloth
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
BKM1804/effa04df-9f32-4e5c-ae19-b91ae85d51b5-new
BKM1804
2025-09-18T11:36:14Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-18T11:36:02Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
Humanlearning/ppo-LunarLander-v3
Humanlearning
2025-09-18T11:33:49Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-09-18T10:53:54Z
---
library_name: stable-baselines3
tags:
- LunarLander-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v3
      type: LunarLander-v3
    metrics:
    - type: mean_reward
      value: 254.87 +/- 15.09
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v3**

This is a trained model of a **PPO** agent playing **LunarLander-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename inside the repo is an assumption; adjust it to the actual file name):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is an assumption
checkpoint = load_from_hub("Humanlearning/ppo-LunarLander-v3", "ppo-LunarLander-v3.zip")
model = PPO.load(checkpoint)
```
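For a quick sanity check against the reported mean reward, the loaded policy can be evaluated locally. This sketch reuses `model` from the snippet above and assumes `gymnasium` with the Box2D extras installed:

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v3")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```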
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758195130
schooncestiaa
2025-09-18T11:33:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scruffy webbed dragonfly", "arxiv:2504.07091", "region:us" ]
null
2025-09-18T11:33:26Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
david4096/chiro-all-MiniLM-L6-v2_concat_e100
david4096
2025-09-18T11:33:31Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "ontology", "on2vec", "graph-neural-networks", "base-all-MiniLM-L6-v2", "general", "general-ontology", "fusion-concat", "gnn-gcn", "small-ontology", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-18T11:33:28Z
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-concat
- gnn-gcn
- small-ontology
---

# chiro_all-MiniLM-L6-v2_concat_e100

This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.

## Model Details

- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: chiro.owl
- **Domain**: general
- **Ontology Concepts**: 26
- **Concept Alignment**: 26/26 (100.0%)
- **Fusion Method**: concat
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 26
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 0.2 MB
- **Model Size**: 87.8 MB
- **Library**: on2vec + sentence-transformers

## Technical Architecture

This model uses a multi-stage architecture:

1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Simple concatenation of text and ontological embeddings

**Embedding Flow:**

- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 26 concepts → GNN → 64 output
- Fusion: concat → Final embedding

## How It Works

This model combines:

1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method

The ontological knowledge helps the model better understand domain-specific relationships and concepts.

## Usage

```python
from sentence_transformers import SentenceTransformer

# Load the model
model = SentenceTransformer('chiro_all-MiniLM-L6-v2_concat_e100')

# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)

# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```

## Fusion Method: concat

Simple concatenation of text and ontology embeddings

## Training Process

This model was created using the on2vec pipeline:

1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types

## Intended Use

This model is particularly effective for:

- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements

## Limitations

- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models

## Citation

If you use this model, please cite the on2vec framework:

```bibtex
@software{on2vec,
  title={on2vec: Ontology Embeddings with Graph Neural Networks},
  author={David Steinberg},
  url={https://github.com/david4096/on2vec},
  year={2024}
}
```

---

Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
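Beyond the pairwise similarity shown in the Usage section, the same embeddings support small-scale semantic search with the built-in helper; the corpus and query below are illustrative:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('chiro_all-MiniLM-L6-v2_concat_e100')

corpus = ["first candidate passage", "second candidate passage", "third candidate passage"]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode("example query", convert_to_tensor=True)

hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], hit["score"])
```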
david4096/ceph-all-MiniLM-L6-v2_concat_e100
david4096
2025-09-18T11:33:28Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "ontology", "on2vec", "graph-neural-networks", "base-all-MiniLM-L6-v2", "general", "general-ontology", "fusion-concat", "gnn-gcn", "small-ontology", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-18T11:33:25Z
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-concat
- gnn-gcn
- small-ontology
---

# ceph_all-MiniLM-L6-v2_concat_e100

This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.

## Model Details

- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: ceph.owl
- **Domain**: general
- **Ontology Concepts**: 330
- **Concept Alignment**: 330/330 (100.0%)
- **Fusion Method**: concat
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 330
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 0.6 MB
- **Model Size**: 90.7 MB
- **Library**: on2vec + sentence-transformers

## Technical Architecture

This model uses a multi-stage architecture:

1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Simple concatenation of text and ontological embeddings

**Embedding Flow:**

- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 330 concepts → GNN → 64 output
- Fusion: concat → Final embedding

## How It Works

This model combines:

1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method

The ontological knowledge helps the model better understand domain-specific relationships and concepts.

## Usage

```python
from sentence_transformers import SentenceTransformer

# Load the model
model = SentenceTransformer('ceph_all-MiniLM-L6-v2_concat_e100')

# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)

# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```

## Fusion Method: concat

Simple concatenation of text and ontology embeddings

## Training Process

This model was created using the on2vec pipeline:

1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types

## Intended Use

This model is particularly effective for:

- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements

## Limitations

- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models

## Citation

If you use this model, please cite the on2vec framework:

```bibtex
@software{on2vec,
  title={on2vec: Ontology Embeddings with Graph Neural Networks},
  author={David Steinberg},
  url={https://github.com/david4096/on2vec},
  year={2024}
}
```

---

Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
tomal66/qwen3-0.6b-sarcasm-sft
tomal66
2025-09-18T11:33:22Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-18T11:33:13Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
david4096/apo-all-MiniLM-L6-v2_concat_e100
david4096
2025-09-18T11:32:30Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "ontology", "on2vec", "graph-neural-networks", "base-all-MiniLM-L6-v2", "general", "general-ontology", "fusion-concat", "gnn-gcn", "small-ontology", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-18T11:32:27Z
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-concat
- gnn-gcn
- small-ontology
---

# apo_all-MiniLM-L6-v2_concat_e100

This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.

## Model Details

- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: apo.owl
- **Domain**: general
- **Ontology Concepts**: 619
- **Concept Alignment**: 619/619 (100.0%)
- **Fusion Method**: concat
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 619
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 0.7 MB
- **Model Size**: 93.4 MB
- **Library**: on2vec + sentence-transformers

## Technical Architecture

This model uses a multi-stage architecture:

1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Simple concatenation of text and ontological embeddings

**Embedding Flow:**

- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 619 concepts → GNN → 64 output
- Fusion: concat → Final embedding

## How It Works

This model combines:

1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method

The ontological knowledge helps the model better understand domain-specific relationships and concepts.

## Usage

```python
from sentence_transformers import SentenceTransformer

# Load the model
model = SentenceTransformer('apo_all-MiniLM-L6-v2_concat_e100')

# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)

# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```

## Fusion Method: concat

Simple concatenation of text and ontology embeddings

## Training Process

This model was created using the on2vec pipeline:

1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types

## Intended Use

This model is particularly effective for:

- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements

## Limitations

- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models

## Citation

If you use this model, please cite the on2vec framework:

```bibtex
@software{on2vec,
  title={on2vec: Ontology Embeddings with Graph Neural Networks},
  author={David Steinberg},
  url={https://github.com/david4096/on2vec},
  year={2024}
}
```

---

Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
david4096/amphx-all-MiniLM-L6-v2_concat_e100
david4096
2025-09-18T11:32:13Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "ontology", "on2vec", "graph-neural-networks", "base-all-MiniLM-L6-v2", "general", "general-ontology", "fusion-concat", "gnn-gcn", "small-ontology", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-18T11:32:11Z
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-concat
- gnn-gcn
- small-ontology
---

# amphx_all-MiniLM-L6-v2_concat_e100

This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.

## Model Details

- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: amphx.owl
- **Domain**: general
- **Ontology Concepts**: 403
- **Concept Alignment**: 403/403 (100.0%)
- **Fusion Method**: concat
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 403
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 0.6 MB
- **Model Size**: 91.4 MB
- **Library**: on2vec + sentence-transformers

## Technical Architecture

This model uses a multi-stage architecture:

1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Simple concatenation of text and ontological embeddings

**Embedding Flow:**

- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 403 concepts → GNN → 64 output
- Fusion: concat → Final embedding

## How It Works

This model combines:

1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method

The ontological knowledge helps the model better understand domain-specific relationships and concepts.

## Usage

```python
from sentence_transformers import SentenceTransformer

# Load the model
model = SentenceTransformer('amphx_all-MiniLM-L6-v2_concat_e100')

# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)

# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```

## Fusion Method: concat

Simple concatenation of text and ontology embeddings

## Training Process

This model was created using the on2vec pipeline:

1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types

## Intended Use

This model is particularly effective for:

- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements

## Limitations

- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models

## Citation

If you use this model, please cite the on2vec framework:

```bibtex
@software{on2vec,
  title={on2vec: Ontology Embeddings with Graph Neural Networks},
  author={David Steinberg},
  url={https://github.com/david4096/on2vec},
  year={2024}
}
```

---

Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
david4096/agro-all-MiniLM-L6-v2_concat_e100
david4096
2025-09-18T11:31:54Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "ontology", "on2vec", "graph-neural-networks", "base-all-MiniLM-L6-v2", "general", "general-ontology", "fusion-concat", "gnn-gcn", "medium-ontology", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-18T11:31:49Z
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-concat
- gnn-gcn
- medium-ontology
---

# agro_all-MiniLM-L6-v2_concat_e100

This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.

## Model Details

- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: agro.owl
- **Domain**: general
- **Ontology Concepts**: 4,162
- **Concept Alignment**: 4,162/4,162 (100.0%)
- **Fusion Method**: concat
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 4162
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 7.2 MB
- **Model Size**: 126.8 MB
- **Library**: on2vec + sentence-transformers

## Technical Architecture

This model uses a multi-stage architecture:

1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Simple concatenation of text and ontological embeddings

**Embedding Flow:**

- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 4162 concepts → GNN → 64 output
- Fusion: concat → Final embedding

## How It Works

This model combines:

1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method

The ontological knowledge helps the model better understand domain-specific relationships and concepts.

## Usage

```python
from sentence_transformers import SentenceTransformer

# Load the model
model = SentenceTransformer('agro_all-MiniLM-L6-v2_concat_e100')

# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)

# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```

## Fusion Method: concat

Simple concatenation of text and ontology embeddings

## Training Process

This model was created using the on2vec pipeline:

1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types

## Intended Use

This model is particularly effective for:

- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements

## Limitations

- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models

## Citation

If you use this model, please cite the on2vec framework:

```bibtex
@software{on2vec,
  title={on2vec: Ontology Embeddings with Graph Neural Networks},
  author={David Steinberg},
  url={https://github.com/david4096/on2vec},
  year={2024}
}
```

---

Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
david4096/EDAM-all-MiniLM-L6-v2_concat_e100
david4096
2025-09-18T11:31:26Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "ontology", "on2vec", "graph-neural-networks", "base-all-MiniLM-L6-v2", "biomedical", "biomedical-ontology", "fusion-concat", "gnn-gcn", "medium-ontology", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-18T11:31:20Z
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- biomedical
- biomedical-ontology
- fusion-concat
- gnn-gcn
- medium-ontology
---

# EDAM_all-MiniLM-L6-v2_concat_e100

This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.

## Model Details

- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: EDAM.owl
- **Domain**: biomedical
- **Ontology Concepts**: 3,511
- **Concept Alignment**: 3,511/3,511 (100.0%)
- **Fusion Method**: concat
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 3511
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 3.2 MB
- **Model Size**: 120.7 MB
- **Library**: on2vec + sentence-transformers

## Technical Architecture

This model uses a multi-stage architecture:

1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Simple concatenation of text and ontological embeddings

**Embedding Flow:**

- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 3511 concepts → GNN → 64 output
- Fusion: concat → Final embedding

## How It Works

This model combines:

1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method

The ontological knowledge helps the model better understand domain-specific relationships and concepts.

## Usage

```python
from sentence_transformers import SentenceTransformer

# Load the model
model = SentenceTransformer('EDAM_all-MiniLM-L6-v2_concat_e100')

# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)

# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```

## Fusion Method: concat

Simple concatenation of text and ontology embeddings

## Training Process

This model was created using the on2vec pipeline:

1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types

## Intended Use

This model is particularly effective for:

- Biomedical domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements

## Limitations

- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models

## Citation

If you use this model, please cite the on2vec framework:

```bibtex
@software{on2vec,
  title={on2vec: Ontology Embeddings with Graph Neural Networks},
  author={David Steinberg},
  url={https://github.com/david4096/on2vec},
  year={2024}
}
```

---

Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
david4096/ado-all-MiniLM-L6-v2_concat_e100
david4096
2025-09-18T11:31:25Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "ontology", "on2vec", "graph-neural-networks", "base-all-MiniLM-L6-v2", "general", "general-ontology", "fusion-concat", "gnn-gcn", "medium-ontology", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-18T11:31:20Z
---
base_model: all-MiniLM-L6-v2
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ontology
- on2vec
- graph-neural-networks
- base-all-MiniLM-L6-v2
- general
- general-ontology
- fusion-concat
- gnn-gcn
- medium-ontology
---

# ado_all-MiniLM-L6-v2_concat_e100

This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.

## Model Details

- **Base Text Model**: all-MiniLM-L6-v2
- Text Embedding Dimension: 384
- **Ontology**: ado.owl
- **Domain**: general
- **Ontology Concepts**: 1,963
- **Concept Alignment**: 1,963/1,963 (100.0%)
- **Fusion Method**: concat
- **GNN Architecture**: GCN
- **Structural Embedding Dimension**: 1963
- **Output Embedding Dimension**: 64
- **Hidden Dimensions**: 128
- **Dropout**: 0.0
- **Training Date**: 2025-09-18
- **on2vec Version**: 0.1.0
- **Source Ontology Size**: 5.2 MB
- **Model Size**: 106.1 MB
- **Library**: on2vec + sentence-transformers

## Technical Architecture

This model uses a multi-stage architecture:

1. **Text Encoding**: Input text is encoded using the base sentence-transformer model
2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships
3. **Fusion Layer**: Simple concatenation of text and ontological embeddings

**Embedding Flow:**

- Text: 384 dimensions → 128 hidden → 64 output
- Structure: 1963 concepts → GNN → 64 output
- Fusion: concat → Final embedding

## How It Works

This model combines:

1. **Text Embeddings**: Generated using the base sentence-transformer model
2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure
3. **Fusion Layer**: Combines both embedding types using the specified fusion method

The ontological knowledge helps the model better understand domain-specific relationships and concepts.

## Usage

```python
from sentence_transformers import SentenceTransformer

# Load the model
model = SentenceTransformer('ado_all-MiniLM-L6-v2_concat_e100')

# Generate embeddings
sentences = ['Example sentence 1', 'Example sentence 2']
embeddings = model.encode(sentences)

# Compute similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
```

## Fusion Method: concat

Simple concatenation of text and ontology embeddings

## Training Process

This model was created using the on2vec pipeline:

1. **Ontology Processing**: The OWL ontology was converted to a graph structure
2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships
3. **Text Integration**: Base model text embeddings were combined with ontological embeddings
4. **Fusion Training**: The fusion layer was trained to optimally combine both embedding types

## Intended Use

This model is particularly effective for:

- General domain text processing
- Tasks requiring understanding of domain-specific relationships
- Semantic similarity in specialized domains
- Classification tasks with domain knowledge requirements

## Limitations

- Performance may vary on domains different from the training ontology
- Ontological knowledge is limited to concepts present in the source OWL file
- May have higher computational requirements than vanilla text models

## Citation

If you use this model, please cite the on2vec framework:

```bibtex
@software{on2vec,
  title={on2vec: Ontology Embeddings with Graph Neural Networks},
  author={David Steinberg},
  url={https://github.com/david4096/on2vec},
  year={2024}
}
```

---

Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
Karthikappi0011/qwen3-14b-finetune-raw-convo-mix
Karthikappi0011
2025-09-18T11:29:24Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-18T10:04:44Z
---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** Karthikappi0011
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-14b-unsloth-bnb-4bit

This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
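The card ships no usage example, so here is a hypothetical inference sketch. It assumes this repo loads directly with Unsloth's `FastLanguageModel` the same way its 4-bit base model does; the prompt is illustrative:

```python
from unsloth import FastLanguageModel

# Assumption: this repo can be loaded like its unsloth 4-bit base model
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Karthikappi0011/qwen3-14b-finetune-raw-convo-mix",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to the faster inference path

inputs = tokenizer("Hello, how can I help you today?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```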
mradermacher/Smoothie-Qwen3-1.7B-symptome-disease-GGUF
mradermacher
2025-09-18T11:27:36Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "qwen3", "en", "base_model:great1123/Smoothie-Qwen3-1.7B-symptome-disease", "base_model:quantized:great1123/Smoothie-Qwen3-1.7B-symptome-disease", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-18T11:05:52Z
--- base_model: great1123/Smoothie-Qwen3-1.7B-symptome-disease language: - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - qwen3 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/great1123/Smoothie-Qwen3-1.7B-symptome-disease <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Smoothie-Qwen3-1.7B-symptome-disease-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen3-1.7B-symptome-disease-GGUF/resolve/main/Smoothie-Qwen3-1.7B-symptome-disease.Q2_K.gguf) | Q2_K | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen3-1.7B-symptome-disease-GGUF/resolve/main/Smoothie-Qwen3-1.7B-symptome-disease.Q3_K_S.gguf) | Q3_K_S | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen3-1.7B-symptome-disease-GGUF/resolve/main/Smoothie-Qwen3-1.7B-symptome-disease.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen3-1.7B-symptome-disease-GGUF/resolve/main/Smoothie-Qwen3-1.7B-symptome-disease.Q3_K_L.gguf) | Q3_K_L | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen3-1.7B-symptome-disease-GGUF/resolve/main/Smoothie-Qwen3-1.7B-symptome-disease.IQ4_XS.gguf) | IQ4_XS | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen3-1.7B-symptome-disease-GGUF/resolve/main/Smoothie-Qwen3-1.7B-symptome-disease.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen3-1.7B-symptome-disease-GGUF/resolve/main/Smoothie-Qwen3-1.7B-symptome-disease.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen3-1.7B-symptome-disease-GGUF/resolve/main/Smoothie-Qwen3-1.7B-symptome-disease.Q5_K_S.gguf) | Q5_K_S | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen3-1.7B-symptome-disease-GGUF/resolve/main/Smoothie-Qwen3-1.7B-symptome-disease.Q5_K_M.gguf) | Q5_K_M | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen3-1.7B-symptome-disease-GGUF/resolve/main/Smoothie-Qwen3-1.7B-symptome-disease.Q6_K.gguf) | Q6_K | 1.5 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen3-1.7B-symptome-disease-GGUF/resolve/main/Smoothie-Qwen3-1.7B-symptome-disease.Q8_0.gguf) | Q8_0 | 1.9 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen3-1.7B-symptome-disease-GGUF/resolve/main/Smoothie-Qwen3-1.7B-symptome-disease.f16.gguf) | f16 | 
3.5 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
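Beyond the llama.cpp CLI workflow referenced in the Usage section above, the same files work from Python. A minimal sketch using `huggingface_hub` and `llama-cpp-python` (the Q4_K_M filename comes from the quant table above; the prompt and context size are illustrative):

```python
# Download the "fast, recommended" Q4_K_M quant from this repo and run a
# short completion. Requires: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Smoothie-Qwen3-1.7B-symptome-disease-GGUF",
    filename="Smoothie-Qwen3-1.7B-symptome-disease.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)  # context size is illustrative
out = llm("List three common symptoms of seasonal influenza.\n", max_tokens=128)
print(out["choices"][0]["text"])
```

Any other quant from the table can be substituted by changing `filename`.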
thibaultmaho/medgemma-4b-it-sft-lora-crc100k-1500subset-max448
thibaultmaho
2025-09-18T11:25:41Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/medgemma-4b-it", "base_model:finetune:google/medgemma-4b-it", "endpoints_compatible", "region:us" ]
null
2025-09-17T13:40:45Z
--- base_model: google/medgemma-4b-it library_name: transformers model_name: medgemma-4b-it-sft-lora-crc100k-1500subset-max448 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for medgemma-4b-it-sft-lora-crc100k-1500subset-max448 This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="thibaultmaho/medgemma-4b-it-sft-lora-crc100k-1500subset-max448", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.23.0 - Transformers: 4.56.1 - Pytorch: 2.6.0 - Datasets: 4.1.0 - Tokenizers: 0.22.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
gumperto/Qwen2.5-1.5B-Instruct-emergent-finetune-tests_samples-down-l14-r1
gumperto
2025-09-18T11:22:46Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "sft", "trl", "unsloth", "conversational", "base_model:unsloth/Qwen2.5-1.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-1.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-18T11:05:16Z
--- base_model: unsloth/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-emergent-finetune-tests_samples-down-l14-r1 tags: - generated_from_trainer - sft - trl - unsloth licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-emergent-finetune-tests_samples-down-l14-r1 This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="gumperto/Qwen2.5-1.5B-Instruct-emergent-finetune-tests_samples-down-l14-r1", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/gumperto-waseda-university/clarifying-em/runs/t5ry435y) This model was trained with SFT. ### Framework versions - TRL: 0.24.0.dev0 - Transformers: 4.56.1 - Pytorch: 2.8.0 - Datasets: 4.1.0 - Tokenizers: 0.22.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
david4096/bfo-all-MiniLM-L6-v2_attention_e256
david4096
2025-09-18T11:22:29Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "ontology", "on2vec", "graph-neural-networks", "base-all-MiniLM-L6-v2", "general", "general-ontology", "fusion-attention", "gnn-gcn", "small-ontology", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-18T11:22:25Z
--- base_model: all-MiniLM-L6-v2 library_name: sentence-transformers license: apache-2.0 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - ontology - on2vec - graph-neural-networks - base-all-MiniLM-L6-v2 - general - general-ontology - fusion-attention - gnn-gcn - small-ontology --- # bfo_all-MiniLM-L6-v2_attention_e256 This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks. ## Model Details - **Base Text Model**: all-MiniLM-L6-v2 - Text Embedding Dimension: 384 - **Ontology**: bfo.owl - **Domain**: general - **Ontology Concepts**: 35 - **Concept Alignment**: 35/35 (100.0%) - **Fusion Method**: attention - **GNN Architecture**: GCN - **Structural Embedding Dimension**: 35 - **Output Embedding Dimension**: 64 - **Hidden Dimensions**: 128 - **Dropout**: 0.0 - **Training Date**: 2025-09-18 - **on2vec Version**: 0.1.0 - **Source Ontology Size**: 0.2 MB - **Model Size**: 91.4 MB - **Library**: on2vec + sentence-transformers ## Technical Architecture This model uses a multi-stage architecture: 1. **Text Encoding**: Input text is encoded using the base sentence-transformer model 2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships 3. **Fusion Layer**: Attention mechanism learns to weight text vs ontological information **Embedding Flow:** - Text: 384 dimensions → 128 hidden → 64 output - Structure: 35 concepts → GNN → 64 output - Fusion: attention → Final embedding ## How It Works This model combines: 1. **Text Embeddings**: Generated using the base sentence-transformer model 2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure 3. **Fusion Layer**: Combines both embedding types using the specified fusion method The ontological knowledge helps the model better understand domain-specific relationships and concepts. ## Usage ```python from sentence_transformers import SentenceTransformer # Load the model model = SentenceTransformer('bfo_all-MiniLM-L6-v2_attention_e256') # Generate embeddings sentences = ['Example sentence 1', 'Example sentence 2'] embeddings = model.encode(sentences) # Compute similarity from sentence_transformers.util import cos_sim similarity = cos_sim(embeddings[0], embeddings[1]) ``` ## Fusion Method: attention Attention-based fusion that learns to focus on relevant embedding components ## Training Process This model was created using the on2vec pipeline: 1. **Ontology Processing**: The OWL ontology was converted to a graph structure 2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships 3. **Text Integration**: Base model text embeddings were combined with ontological embeddings 4. 
**Fusion Training**: The fusion layer was trained to optimally combine both embedding types ## Intended Use This model is particularly effective for: - General domain text processing - Tasks requiring understanding of domain-specific relationships - Semantic similarity in specialized domains - Classification tasks with domain knowledge requirements ## Limitations - Performance may vary on domains different from the training ontology - Ontological knowledge is limited to concepts present in the source OWL file - May have higher computational requirements than vanilla text models ## Citation If you use this model, please cite the on2vec framework: ```bibtex @software{on2vec, title={on2vec: Ontology Embeddings with Graph Neural Networks}, author={Your Name}, url={https://github.com/david4096/on2vec}, year={2024} } ``` --- Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
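To make the fusion step described above concrete, here is a rough, self-contained sketch of attention-style fusion of a text embedding with a GNN-derived ontology embedding. This is an illustration of the idea only, not on2vec's actual implementation; the dimensions simply mirror the "Embedding Flow" listed in the model details (384-d text, 64-d fused output).

```python
# Rough illustration of attention-based fusion of a text embedding with a
# GNN ontology embedding. Not on2vec's actual code -- dimensions only mirror
# the "Embedding Flow" above (384-d text -> 64-d fused output).
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, text_dim: int = 384, fused_dim: int = 64):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, fused_dim)  # align text to 64-d
        # Softmax gate produces one weight per source (text vs. ontology).
        self.gate = nn.Sequential(nn.Linear(2 * fused_dim, 2), nn.Softmax(dim=-1))

    def forward(self, text_emb: torch.Tensor, onto_emb: torch.Tensor) -> torch.Tensor:
        t = self.text_proj(text_emb)
        w = self.gate(torch.cat([t, onto_emb], dim=-1))
        return w[..., :1] * t + w[..., 1:] * onto_emb  # weighted combination

fusion = AttentionFusion()
text = torch.randn(2, 384)   # batch of base-model text embeddings
onto = torch.randn(2, 64)    # matching GNN structural embeddings
print(fusion(text, onto).shape)  # torch.Size([2, 64])
```

The learned softmax gate plays the role described above: per input, it decides how much weight to give the text signal versus the ontological signal.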
david4096/apo-all-MiniLM-L6-v2_attention_e256
david4096
2025-09-18T11:21:48Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "ontology", "on2vec", "graph-neural-networks", "base-all-MiniLM-L6-v2", "general", "general-ontology", "fusion-attention", "gnn-gcn", "small-ontology", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-18T11:21:45Z
--- base_model: all-MiniLM-L6-v2 library_name: sentence-transformers license: apache-2.0 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - ontology - on2vec - graph-neural-networks - base-all-MiniLM-L6-v2 - general - general-ontology - fusion-attention - gnn-gcn - small-ontology --- # apo_all-MiniLM-L6-v2_attention_e256 This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks. ## Model Details - **Base Text Model**: all-MiniLM-L6-v2 - Text Embedding Dimension: 384 - **Ontology**: apo.owl - **Domain**: general - **Ontology Concepts**: 619 - **Concept Alignment**: 619/619 (100.0%) - **Fusion Method**: attention - **GNN Architecture**: GCN - **Structural Embedding Dimension**: 619 - **Output Embedding Dimension**: 64 - **Hidden Dimensions**: 128 - **Dropout**: 0.0 - **Training Date**: 2025-09-18 - **on2vec Version**: 0.1.0 - **Source Ontology Size**: 0.7 MB - **Model Size**: 96.9 MB - **Library**: on2vec + sentence-transformers ## Technical Architecture This model uses a multi-stage architecture: 1. **Text Encoding**: Input text is encoded using the base sentence-transformer model 2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships 3. **Fusion Layer**: Attention mechanism learns to weight text vs ontological information **Embedding Flow:** - Text: 384 dimensions → 128 hidden → 64 output - Structure: 619 concepts → GNN → 64 output - Fusion: attention → Final embedding ## How It Works This model combines: 1. **Text Embeddings**: Generated using the base sentence-transformer model 2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure 3. **Fusion Layer**: Combines both embedding types using the specified fusion method The ontological knowledge helps the model better understand domain-specific relationships and concepts. ## Usage ```python from sentence_transformers import SentenceTransformer # Load the model model = SentenceTransformer('apo_all-MiniLM-L6-v2_attention_e256') # Generate embeddings sentences = ['Example sentence 1', 'Example sentence 2'] embeddings = model.encode(sentences) # Compute similarity from sentence_transformers.util import cos_sim similarity = cos_sim(embeddings[0], embeddings[1]) ``` ## Fusion Method: attention Attention-based fusion that learns to focus on relevant embedding components ## Training Process This model was created using the on2vec pipeline: 1. **Ontology Processing**: The OWL ontology was converted to a graph structure 2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships 3. **Text Integration**: Base model text embeddings were combined with ontological embeddings 4. 
**Fusion Training**: The fusion layer was trained to optimally combine both embedding types ## Intended Use This model is particularly effective for: - General domain text processing - Tasks requiring understanding of domain-specific relationships - Semantic similarity in specialized domains - Classification tasks with domain knowledge requirements ## Limitations - Performance may vary on domains different from the training ontology - Ontological knowledge is limited to concepts present in the source OWL file - May have higher computational requirements than vanilla text models ## Citation If you use this model, please cite the on2vec framework: ```bibtex @software{on2vec, title={on2vec: Ontology Embeddings with Graph Neural Networks}, author={Your Name}, url={https://github.com/david4096/on2vec}, year={2024} } ``` --- Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
david4096/aism-all-MiniLM-L6-v2_attention_e256
david4096
2025-09-18T11:21:47Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "ontology", "on2vec", "graph-neural-networks", "base-all-MiniLM-L6-v2", "general", "general-ontology", "fusion-attention", "gnn-gcn", "medium-ontology", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-18T11:21:38Z
--- base_model: all-MiniLM-L6-v2 library_name: sentence-transformers license: apache-2.0 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - ontology - on2vec - graph-neural-networks - base-all-MiniLM-L6-v2 - general - general-ontology - fusion-attention - gnn-gcn - medium-ontology --- # aism_all-MiniLM-L6-v2_attention_e256 This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks. ## Model Details - **Base Text Model**: all-MiniLM-L6-v2 - Text Embedding Dimension: 384 - **Ontology**: aism.owl - **Domain**: general - **Ontology Concepts**: 8,540 - **Concept Alignment**: 8,540/8,540 (100.0%) - **Fusion Method**: attention - **GNN Architecture**: GCN - **Structural Embedding Dimension**: 8540 - **Output Embedding Dimension**: 64 - **Hidden Dimensions**: 128 - **Dropout**: 0.0 - **Training Date**: 2025-09-18 - **on2vec Version**: 0.1.0 - **Source Ontology Size**: 28.8 MB - **Model Size**: 171.5 MB - **Library**: on2vec + sentence-transformers ## Technical Architecture This model uses a multi-stage architecture: 1. **Text Encoding**: Input text is encoded using the base sentence-transformer model 2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships 3. **Fusion Layer**: Attention mechanism learns to weight text vs ontological information **Embedding Flow:** - Text: 384 dimensions → 128 hidden → 64 output - Structure: 8540 concepts → GNN → 64 output - Fusion: attention → Final embedding ## How It Works This model combines: 1. **Text Embeddings**: Generated using the base sentence-transformer model 2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure 3. **Fusion Layer**: Combines both embedding types using the specified fusion method The ontological knowledge helps the model better understand domain-specific relationships and concepts. ## Usage ```python from sentence_transformers import SentenceTransformer # Load the model model = SentenceTransformer('aism_all-MiniLM-L6-v2_attention_e256') # Generate embeddings sentences = ['Example sentence 1', 'Example sentence 2'] embeddings = model.encode(sentences) # Compute similarity from sentence_transformers.util import cos_sim similarity = cos_sim(embeddings[0], embeddings[1]) ``` ## Fusion Method: attention Attention-based fusion that learns to focus on relevant embedding components ## Training Process This model was created using the on2vec pipeline: 1. **Ontology Processing**: The OWL ontology was converted to a graph structure 2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships 3. **Text Integration**: Base model text embeddings were combined with ontological embeddings 4. 
**Fusion Training**: The fusion layer was trained to optimally combine both embedding types ## Intended Use This model is particularly effective for: - General domain text processing - Tasks requiring understanding of domain-specific relationships - Semantic similarity in specialized domains - Classification tasks with domain knowledge requirements ## Limitations - Performance may vary on domains different from the training ontology - Ontological knowledge is limited to concepts present in the source OWL file - May have higher computational requirements than vanilla text models ## Citation If you use this model, please cite the on2vec framework: ```bibtex @software{on2vec, title={on2vec: Ontology Embeddings with Graph Neural Networks}, author={Your Name}, url={https://github.com/david4096/on2vec}, year={2024} } ``` --- Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
cabrel09/crop_leaf_disease_detector
cabrel09
2025-09-18T11:21:38Z
37
1
transformers
[ "transformers", "onnx", "safetensors", "vit", "image-classification", "vision transformer", "agriculture", "plant disease detection", "smart farming", "image classification", "base_model:WinKawaks/vit-tiny-patch16-224", "base_model:quantized:WinKawaks/vit-tiny-patch16-224", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-09-06T01:38:53Z
---
base_model:
- WinKawaks/vit-tiny-patch16-224
library_name: transformers
license: mit
metrics:
- accuracy
pipeline_tag: image-classification
tags:
- vision transformer
- agriculture
- plant disease detection
- smart farming
- image classification
---

# Model Card for the Smart-Agriculture Disease Detection Transformer

This model is a Vision Transformer (ViT) designed to identify plant diseases in crops as part of a smart-farming system. It was trained on a diverse dataset of plant images covering several disease categories affecting crops such as maize, potato, rice, and wheat. The model aims to give farmers and agronomists real-time disease detection for better crop management.

## Model Details

### Model Description

This Vision Transformer has been fine-tuned to classify various plant diseases commonly found in agricultural settings. The model covers crops such as maize, potato, rice, and wheat, identifying diseases such as rust, blight, leaf spot, and others. The goal is to enable precision agriculture by helping farmers detect diseases early and take appropriate action.

- **Developed by:** Cabrel KEPSEU
- **Model type:** Vision Transformer (ViT)
- **Language(s) (NLP):** N/A (computer-vision model)
- **License:** Apache 2.0
- **Fine-tuned from model:** [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224)
- **Input:** crop images (RGB)
- **Output:** disease classification labels (healthy or diseased categories)

## Diseases Detected by the Model

| Crop | Identified Diseases |
|---------|------------------------------|
| Maize | Common Rust |
| Maize | Gray Leaf Spot |
| Maize | Healthy |
| Maize | Leaf Blight |
| - | Invalid |
| Potato | Early Blight |
| Potato | Healthy |
| Potato | Late Blight |
| Rice | Brown Spot |
| Rice | Healthy |
| Rice | Blast |
| Wheat | Brown Rust |
| Wheat | Healthy |
| Wheat | Yellow Rust |

## Uses

### Direct Use

This model can be used directly to classify crop images and detect plant diseases. It is particularly useful for precision agriculture, letting users monitor crop health and intervene early based on the detected disease.

### Downstream Use

This model can be fine-tuned on other agricultural datasets for specific crops or regions to improve its performance, or integrated into broader precision-farming systems that include other features such as weather forecasting and irrigation control. Thanks to its small parameter count, it can be quantized or deployed at full precision on edge devices without compromising accuracy or precision.

### Out-of-Scope Use

This model is not designed for non-agricultural image-classification tasks or for settings with insufficient or very noisy data. Misuse includes applying the model in areas whose agricultural conditions differ greatly from those it was trained on.

## Bias, Risks, and Limitations

- The model may be biased toward the crops and diseases present in the training dataset, leading to weaker performance on unrepresented diseases or crop varieties.
- False negatives (failing to detect a disease) can leave crop damage untreated, while false positives could lead to unnecessary interventions.

### Recommendations

Users should evaluate the model on their specific crops and growing conditions. Regular updates and retraining with local data are recommended for optimal performance.

## How to Get Started with the Model

```python
from PIL import Image
from transformers import ViTFeatureExtractor, ViTForImageClassification

# Note: the repo id below matches this model's repository name.
feature_extractor = ViTFeatureExtractor.from_pretrained('cabrel09/crop_leaf_disease_detector')
model = ViTForImageClassification.from_pretrained(
    'cabrel09/crop_leaf_disease_detector',
    ignore_mismatched_sizes=True
)

image = Image.open('<image_path>')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```

## Training Details

### Training Data

The model was trained on a dataset of images of various crops with labeled diseases, including the following categories:

- **Maize**: Common Rust, Gray Leaf Spot, Leaf Blight, Healthy
- **Potato**: Early Blight, Late Blight, Healthy
- **Rice**: Brown Spot, Hispa, Blast, Healthy
- **Wheat**: Brown Rust, Yellow Rust, Healthy

The dataset also includes images captured under varied lighting conditions, from both controlled and uncontrolled environments, and from different angles, to simulate real-world agricultural scenarios. We used publicly available datasets as well as our own private data.

### Training Procedure

The model was fine-tuned using a vision-transformer architecture pre-trained on the ImageNet dataset. The dataset was preprocessed by resizing images and normalizing pixel values.

#### Training Hyperparameters

- **Batch size:** 32
- **Learning rate:** 2e-5
- **Epochs:** 4
- **Optimizer:** AdamW
- **Precision:** fp16

### Evaluation

![Confusion matrix](disease_classification_metrics.png)

#### Testing Data, Factors & Metrics

The model was evaluated on a validation set made up of 20% of the original dataset, with the following metrics:

- **Accuracy:** 98%
- **Precision:** 97%
- **Recall:** 97%
- **F1 score:** 96%

## Environmental Impact

Carbon emissions from model training can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute).

- **Hardware type:** NVIDIA L40S
- **Hours used:** 1 hour
- **Cloud provider:** Lightning AI

## Technical Specifications

### Model Architecture and Objective

The model uses a Vision Transformer architecture to learn image representations and classify them into disease categories. Its self-attention mechanism lets it capture global contextual information in images, making it well suited to agricultural disease detection.

### Compute Infrastructure

#### Software

- Python 3.9
- PyTorch 2.4.1+cu121
- pytorch_lightning
- Hugging Face Transformers library

## Citation

If you use this model in your research or applications, please cite it as follows:

**BibTeX:**

```
@misc{cabrel2025cropdiseases,
  title={Crop Leaf Disease Detector},
  author={Cabrel Kepseu},
  year={2025},
  publisher={Hugging Face},
}
```

**APA:**

Cabrel, K. (2025). Crop Leaf Disease Detector. Hugging Face.

## Model Card Contact

For more information, contact: [email protected]
mradermacher/Gemma3-Python-22k-1B-i1-GGUF
mradermacher
2025-09-18T11:21:30Z
0
0
transformers
[ "transformers", "gguf", "lora", "sft", "trl", "unsloth", "fine-tuned", "en", "dataset:Vezora/Tested-22k-Python-Alpaca", "base_model:theprint/Gemma3-Python-22k-1B", "base_model:adapter:theprint/Gemma3-Python-22k-1B", "license:mit", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-09-18T10:43:14Z
--- base_model: theprint/Gemma3-Python-22k-1B datasets: - Vezora/Tested-22k-Python-Alpaca language: en library_name: transformers license: mit mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - lora - sft - transformers - trl - unsloth - fine-tuned --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> weighted/imatrix quants of https://huggingface.co/theprint/Gemma3-Python-22k-1B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Gemma3-Python-22k-1B-i1-GGUF).*** static quants are available at https://huggingface.co/mradermacher/Gemma3-Python-22k-1B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Gemma3-Python-22k-1B-i1-GGUF/resolve/main/Gemma3-Python-22k-1B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Python-22k-1B-i1-GGUF/resolve/main/Gemma3-Python-22k-1B.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Python-22k-1B-i1-GGUF/resolve/main/Gemma3-Python-22k-1B.i1-IQ1_M.gguf) | i1-IQ1_M | 0.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Python-22k-1B-i1-GGUF/resolve/main/Gemma3-Python-22k-1B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Python-22k-1B-i1-GGUF/resolve/main/Gemma3-Python-22k-1B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Python-22k-1B-i1-GGUF/resolve/main/Gemma3-Python-22k-1B.i1-IQ2_S.gguf) | i1-IQ2_S | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Python-22k-1B-i1-GGUF/resolve/main/Gemma3-Python-22k-1B.i1-IQ2_M.gguf) | i1-IQ2_M | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Python-22k-1B-i1-GGUF/resolve/main/Gemma3-Python-22k-1B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.9 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Python-22k-1B-i1-GGUF/resolve/main/Gemma3-Python-22k-1B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Python-22k-1B-i1-GGUF/resolve/main/Gemma3-Python-22k-1B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Python-22k-1B-i1-GGUF/resolve/main/Gemma3-Python-22k-1B.i1-IQ3_S.gguf) | i1-IQ3_S | 1.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Python-22k-1B-i1-GGUF/resolve/main/Gemma3-Python-22k-1B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Python-22k-1B-i1-GGUF/resolve/main/Gemma3-Python-22k-1B.i1-Q2_K.gguf) | i1-Q2_K | 1.0 | IQ3_XXS probably better | | 
[GGUF](https://huggingface.co/mradermacher/Gemma3-Python-22k-1B-i1-GGUF/resolve/main/Gemma3-Python-22k-1B.i1-IQ3_M.gguf) | i1-IQ3_M | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Python-22k-1B-i1-GGUF/resolve/main/Gemma3-Python-22k-1B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Python-22k-1B-i1-GGUF/resolve/main/Gemma3-Python-22k-1B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.0 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Python-22k-1B-i1-GGUF/resolve/main/Gemma3-Python-22k-1B.i1-Q4_0.gguf) | i1-Q4_0 | 1.0 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Python-22k-1B-i1-GGUF/resolve/main/Gemma3-Python-22k-1B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.0 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Python-22k-1B-i1-GGUF/resolve/main/Gemma3-Python-22k-1B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.0 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Python-22k-1B-i1-GGUF/resolve/main/Gemma3-Python-22k-1B.i1-Q4_1.gguf) | i1-Q4_1 | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Python-22k-1B-i1-GGUF/resolve/main/Gemma3-Python-22k-1B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.1 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Python-22k-1B-i1-GGUF/resolve/main/Gemma3-Python-22k-1B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Python-22k-1B-i1-GGUF/resolve/main/Gemma3-Python-22k-1B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Python-22k-1B-i1-GGUF/resolve/main/Gemma3-Python-22k-1B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Python-22k-1B-i1-GGUF/resolve/main/Gemma3-Python-22k-1B.i1-Q6_K.gguf) | i1-Q6_K | 1.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
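The imatrix quants above can likewise be used from Python. A minimal sketch, assuming a recent `llama-cpp-python` (with `huggingface_hub` installed) that provides `Llama.from_pretrained`; the filename is the "fast, recommended" i1-Q4_K_M from the table, and the prompt is illustrative:

```python
# Pull the recommended i1-Q4_K_M quant straight from the Hub and chat with it.
# Assumes: pip install llama-cpp-python huggingface_hub (recent versions).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Gemma3-Python-22k-1B-i1-GGUF",
    filename="Gemma3-Python-22k-1B.i1-Q4_K_M.gguf",
)
resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python one-liner that sums a list."}],
    max_tokens=128,
)
print(resp["choices"][0]["message"]["content"])
```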
david4096/amphx-all-MiniLM-L6-v2_attention_e256
david4096
2025-09-18T11:21:25Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "ontology", "on2vec", "graph-neural-networks", "base-all-MiniLM-L6-v2", "general", "general-ontology", "fusion-attention", "gnn-gcn", "small-ontology", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-18T11:21:21Z
--- base_model: all-MiniLM-L6-v2 library_name: sentence-transformers license: apache-2.0 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - ontology - on2vec - graph-neural-networks - base-all-MiniLM-L6-v2 - general - general-ontology - fusion-attention - gnn-gcn - small-ontology --- # amphx_all-MiniLM-L6-v2_attention_e256 This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks. ## Model Details - **Base Text Model**: all-MiniLM-L6-v2 - Text Embedding Dimension: 384 - **Ontology**: amphx.owl - **Domain**: general - **Ontology Concepts**: 403 - **Concept Alignment**: 403/403 (100.0%) - **Fusion Method**: attention - **GNN Architecture**: GCN - **Structural Embedding Dimension**: 403 - **Output Embedding Dimension**: 64 - **Hidden Dimensions**: 128 - **Dropout**: 0.0 - **Training Date**: 2025-09-18 - **on2vec Version**: 0.1.0 - **Source Ontology Size**: 0.6 MB - **Model Size**: 94.8 MB - **Library**: on2vec + sentence-transformers ## Technical Architecture This model uses a multi-stage architecture: 1. **Text Encoding**: Input text is encoded using the base sentence-transformer model 2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships 3. **Fusion Layer**: Attention mechanism learns to weight text vs ontological information **Embedding Flow:** - Text: 384 dimensions → 128 hidden → 64 output - Structure: 403 concepts → GNN → 64 output - Fusion: attention → Final embedding ## How It Works This model combines: 1. **Text Embeddings**: Generated using the base sentence-transformer model 2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure 3. **Fusion Layer**: Combines both embedding types using the specified fusion method The ontological knowledge helps the model better understand domain-specific relationships and concepts. ## Usage ```python from sentence_transformers import SentenceTransformer # Load the model model = SentenceTransformer('amphx_all-MiniLM-L6-v2_attention_e256') # Generate embeddings sentences = ['Example sentence 1', 'Example sentence 2'] embeddings = model.encode(sentences) # Compute similarity from sentence_transformers.util import cos_sim similarity = cos_sim(embeddings[0], embeddings[1]) ``` ## Fusion Method: attention Attention-based fusion that learns to focus on relevant embedding components ## Training Process This model was created using the on2vec pipeline: 1. **Ontology Processing**: The OWL ontology was converted to a graph structure 2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships 3. **Text Integration**: Base model text embeddings were combined with ontological embeddings 4. 
**Fusion Training**: The fusion layer was trained to optimally combine both embedding types ## Intended Use This model is particularly effective for: - General domain text processing - Tasks requiring understanding of domain-specific relationships - Semantic similarity in specialized domains - Classification tasks with domain knowledge requirements ## Limitations - Performance may vary on domains different from the training ontology - Ontological knowledge is limited to concepts present in the source OWL file - May have higher computational requirements than vanilla text models ## Citation If you use this model, please cite the on2vec framework: ```bibtex @software{on2vec, title={on2vec: Ontology Embeddings with Graph Neural Networks}, author={Your Name}, url={https://github.com/david4096/on2vec}, year={2024} } ``` --- Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
onnx-community/wav2vec2-xls-r-300m-ONNX
onnx-community
2025-09-18T11:21:08Z
0
0
transformers.js
[ "transformers.js", "onnx", "wav2vec2", "pretraining", "base_model:facebook/wav2vec2-xls-r-300m", "base_model:quantized:facebook/wav2vec2-xls-r-300m", "region:us" ]
null
2025-09-18T11:20:51Z
--- library_name: transformers.js base_model: - facebook/wav2vec2-xls-r-300m --- # wav2vec2-xls-r-300m (ONNX) This is an ONNX version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
david4096/agro-all-MiniLM-L6-v2_attention_e256
david4096
2025-09-18T11:21:03Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "ontology", "on2vec", "graph-neural-networks", "base-all-MiniLM-L6-v2", "general", "general-ontology", "fusion-attention", "gnn-gcn", "medium-ontology", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-18T11:20:58Z
--- base_model: all-MiniLM-L6-v2 library_name: sentence-transformers license: apache-2.0 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - ontology - on2vec - graph-neural-networks - base-all-MiniLM-L6-v2 - general - general-ontology - fusion-attention - gnn-gcn - medium-ontology --- # agro_all-MiniLM-L6-v2_attention_e256 This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks. ## Model Details - **Base Text Model**: all-MiniLM-L6-v2 - Text Embedding Dimension: 384 - **Ontology**: agro.owl - **Domain**: general - **Ontology Concepts**: 4,162 - **Concept Alignment**: 4,162/4,162 (100.0%) - **Fusion Method**: attention - **GNN Architecture**: GCN - **Structural Embedding Dimension**: 4162 - **Output Embedding Dimension**: 64 - **Hidden Dimensions**: 128 - **Dropout**: 0.0 - **Training Date**: 2025-09-18 - **on2vec Version**: 0.1.0 - **Source Ontology Size**: 7.2 MB - **Model Size**: 130.2 MB - **Library**: on2vec + sentence-transformers ## Technical Architecture This model uses a multi-stage architecture: 1. **Text Encoding**: Input text is encoded using the base sentence-transformer model 2. **Ontological Embedding**: Pre-trained GNN embeddings capture structural relationships 3. **Fusion Layer**: Attention mechanism learns to weight text vs ontological information **Embedding Flow:** - Text: 384 dimensions → 128 hidden → 64 output - Structure: 4162 concepts → GNN → 64 output - Fusion: attention → Final embedding ## How It Works This model combines: 1. **Text Embeddings**: Generated using the base sentence-transformer model 2. **Ontological Embeddings**: Created by training Graph Neural Networks on OWL ontology structure 3. **Fusion Layer**: Combines both embedding types using the specified fusion method The ontological knowledge helps the model better understand domain-specific relationships and concepts. ## Usage ```python from sentence_transformers import SentenceTransformer # Load the model model = SentenceTransformer('agro_all-MiniLM-L6-v2_attention_e256') # Generate embeddings sentences = ['Example sentence 1', 'Example sentence 2'] embeddings = model.encode(sentences) # Compute similarity from sentence_transformers.util import cos_sim similarity = cos_sim(embeddings[0], embeddings[1]) ``` ## Fusion Method: attention Attention-based fusion that learns to focus on relevant embedding components ## Training Process This model was created using the on2vec pipeline: 1. **Ontology Processing**: The OWL ontology was converted to a graph structure 2. **GNN Training**: Graph Neural Networks were trained to learn ontological relationships 3. **Text Integration**: Base model text embeddings were combined with ontological embeddings 4. 
**Fusion Training**: The fusion layer was trained to optimally combine both embedding types ## Intended Use This model is particularly effective for: - General domain text processing - Tasks requiring understanding of domain-specific relationships - Semantic similarity in specialized domains - Classification tasks with domain knowledge requirements ## Limitations - Performance may vary on domains different from the training ontology - Ontological knowledge is limited to concepts present in the source OWL file - May have higher computational requirements than vanilla text models ## Citation If you use this model, please cite the on2vec framework: ```bibtex @software{on2vec, title={on2vec: Ontology Embeddings with Graph Neural Networks}, author={Your Name}, url={https://github.com/david4096/on2vec}, year={2024} } ``` --- Created with [on2vec](https://github.com/david4096/on2vec) 🧬→🤖
kingkim/Dooroo_2508
kingkim
2025-09-18T11:16:17Z
0
0
null
[ "safetensors", "qwen3", "dataset:kingkim/yeosu_tour", "dataset:kingkim/yeosu_island", "base_model:Qwen/Qwen3-4B-Instruct-2507", "base_model:finetune:Qwen/Qwen3-4B-Instruct-2507", "license:apache-2.0", "region:us" ]
null
2025-09-18T05:40:22Z
---
license: apache-2.0
datasets:
- kingkim/yeosu_tour
- kingkim/yeosu_island
base_model:
- Qwen/Qwen3-4B-Instruct-2507
- unsloth/Qwen3-4B-Instruct-2507
tags:
- unsloth
- trl
- sft
---

# Dooroo\_2508: A Yeosu-Tourism-Specialized Chatbot Model

This model was fine-tuned from [unsloth/Qwen3-4B-Instruct-2507](https://huggingface.co/unsloth/Qwen3-4B-Instruct-2507) to carry specialized knowledge of tourist attractions and island information for Yeosu, South Korea. Training used the **Unsloth library** with LoRA (Low-Rank Adaptation) for efficient fine-tuning, and the model aims to generate natural, accurate answers to questions about traveling in Yeosu.

## 🛠️ Training Procedure

### 1. Base Model

* **Model:** `unsloth/Qwen3-4B-Instruct-2507`
* **Library:** `Unsloth` was used to optimize memory usage and significantly speed up training.

### 2. Dataset

Training used a merge of the two datasets below. After combining each dataset's `train` and `test` splits, the merged `train` set was shuffled so the model would not become biased toward any particular topic.

* [kingkim/yeosu_tour](https://huggingface.co/datasets/kingkim/yeosu_tour): data on Yeosu tourist attractions
* [kingkim/yeosu_island](https://huggingface.co/datasets/kingkim/yeosu_island): data on Yeosu's islands

### 3. Hyperparameters

The main hyperparameters used for training are as follows; a code sketch of the LoRA setup appears after the tables.

#### LoRA Settings

| Parameter | Value | Description |
| :--- | :--- | :--- |
| `r` | `16` | Rank of the LoRA matrices |
| `lora_alpha` | `32` | LoRA scaling factor |
| `lora_dropout` | `0.05` | Dropout rate for the LoRA layers |
| `target_modules` | `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj` | Modules LoRA is applied to |

#### Training Arguments

| Parameter | Value | Description |
| :--- | :--- | :--- |
| `num_train_epochs` | `15` | Total number of training epochs |
| `learning_rate` | `4e-6` | Learning rate |
| `per_device_train_batch_size` | `32` | Per-device training batch size |
| `gradient_accumulation_steps` | `2` | Gradient accumulation steps |
| `optimizer` | `adamw_8bit` | 8-bit AdamW optimizer |
| `lr_scheduler_type` | `linear` | Linear learning-rate scheduler |

## 📊 Evaluation Results

Final evaluation results on the `eval_dataset`. **Loss** measures the gap between the model's predictions and the targets; lower values indicate better performance.

| Metric | Value |
| :--- | :--- |
| **`eval_loss`** | **`1.5407`** |
| `eval_runtime` | `30.8676` s |
| `eval_samples_per_second` | `68.551` |
| `eval_steps_per_second` | `8.585` |
| `epoch` | `15.0` |
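For reference, a minimal sketch of how the LoRA settings in the table above map onto Unsloth code. This is an illustration under stated assumptions (standard Unsloth API; the 4-bit loading flag and sequence length are placeholders not stated in this card), not the exact training script:

```python
# Minimal sketch of the LoRA configuration described above, using Unsloth.
# Assumptions: load_in_4bit and max_seq_length are illustrative placeholders.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen3-4B-Instruct-2507",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,               # LoRA rank
    lora_alpha=32,      # LoRA scaling factor
    lora_dropout=0.05,  # dropout on the LoRA layers
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```

The training arguments in the second table (epochs, learning rate, batch size, `adamw_8bit`, linear scheduler) would then be passed to a TRL `SFTTrainer` in the usual way.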
akash07k/AI_vs_Human_distilBert
akash07k
2025-09-18T11:13:53Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-09-18T11:03:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
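Although the card template above is still unfilled, the repository metadata (a `distilbert` checkpoint with the `text-classification` pipeline tag) suggests the standard classification workflow. A minimal, hedged sketch; the label set is whatever the checkpoint's own config defines and is not documented here:

```python
# Hedged sketch: run this checkpoint as a text classifier, per the repo's
# "text-classification" pipeline tag. Labels come from the model's config.
from transformers import pipeline

clf = pipeline("text-classification", model="akash07k/AI_vs_Human_distilBert")
print(clf("This passage was drafted entirely by a language model."))
```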
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758193913
schooncestiaa
2025-09-18T11:13:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scruffy webbed dragonfly", "arxiv:2504.07091", "region:us" ]
null
2025-09-18T11:12:53Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - scruffy webbed dragonfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Woody928/alfworld-wm-70k-both
Woody928
2025-09-18T11:11:57Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:Qwen/Qwen2.5-VL-7B-Instruct", "lora", "sft", "transformers", "trl", "text-generation", "conversational", "arxiv:1910.09700", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "region:us" ]
text-generation
2025-09-18T09:51:58Z
--- base_model: Qwen/Qwen2.5-VL-7B-Instruct library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:Qwen/Qwen2.5-VL-7B-Instruct - lora - sft - transformers - trl --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.16.0
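The template above is unfilled, but the metadata identifies this repository as a PEFT (LoRA) adapter for Qwen/Qwen2.5-VL-7B-Instruct. A minimal, hedged loading sketch, assuming a transformers version with Qwen2.5-VL support and `peft` installed:

```python
# Load the base VL model and attach this LoRA adapter with PEFT.
# Assumes a recent transformers release with Qwen2.5-VL support.
from transformers import Qwen2_5_VLForConditionalGeneration
from peft import PeftModel

base = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct", device_map="auto"
)
model = PeftModel.from_pretrained(base, "Woody928/alfworld-wm-70k-both")
model.eval()  # adapter weights are applied on top of the frozen base model
```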
mradermacher/Archer2.0-Code-1.5B-Preview-GGUF
mradermacher
2025-09-18T11:11:30Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:Fate-Zero/Archer2.0-Code-1.5B-Preview", "base_model:quantized:Fate-Zero/Archer2.0-Code-1.5B-Preview", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-18T11:00:30Z
--- base_model: Fate-Zero/Archer2.0-Code-1.5B-Preview language: - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/Fate-Zero/Archer2.0-Code-1.5B-Preview <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Archer2.0-Code-1.5B-Preview-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Archer2.0-Code-1.5B-Preview-GGUF/resolve/main/Archer2.0-Code-1.5B-Preview.Q2_K.gguf) | Q2_K | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/Archer2.0-Code-1.5B-Preview-GGUF/resolve/main/Archer2.0-Code-1.5B-Preview.Q3_K_S.gguf) | Q3_K_S | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Archer2.0-Code-1.5B-Preview-GGUF/resolve/main/Archer2.0-Code-1.5B-Preview.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Archer2.0-Code-1.5B-Preview-GGUF/resolve/main/Archer2.0-Code-1.5B-Preview.Q3_K_L.gguf) | Q3_K_L | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/Archer2.0-Code-1.5B-Preview-GGUF/resolve/main/Archer2.0-Code-1.5B-Preview.IQ4_XS.gguf) | IQ4_XS | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/Archer2.0-Code-1.5B-Preview-GGUF/resolve/main/Archer2.0-Code-1.5B-Preview.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Archer2.0-Code-1.5B-Preview-GGUF/resolve/main/Archer2.0-Code-1.5B-Preview.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Archer2.0-Code-1.5B-Preview-GGUF/resolve/main/Archer2.0-Code-1.5B-Preview.Q5_K_S.gguf) | Q5_K_S | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/Archer2.0-Code-1.5B-Preview-GGUF/resolve/main/Archer2.0-Code-1.5B-Preview.Q5_K_M.gguf) | Q5_K_M | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/Archer2.0-Code-1.5B-Preview-GGUF/resolve/main/Archer2.0-Code-1.5B-Preview.Q6_K.gguf) | Q6_K | 1.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Archer2.0-Code-1.5B-Preview-GGUF/resolve/main/Archer2.0-Code-1.5B-Preview.Q8_0.gguf) | Q8_0 | 2.0 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Archer2.0-Code-1.5B-Preview-GGUF/resolve/main/Archer2.0-Code-1.5B-Preview.f16.gguf) | f16 | 3.7 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 
## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
lemonhat/Llama-3.2-3B-Instruct-C2_re_100k_tag5_cleaned_hermes_toolv6_dethink_replacedv1
lemonhat
2025-09-18T11:09:02Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-3B-Instruct", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-18T11:07:41Z
--- library_name: transformers license: other base_model: meta-llama/Llama-3.2-3B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: C2_re_100k_tag5_cleaned_hermes_toolv6_dethink_replacedv1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # C2_re_100k_tag5_cleaned_hermes_toolv6_dethink_replacedv1 This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) on the C2_re_100k_tag5_cleaned_hermes_toolv6_dethink_replacedv1 dataset. It achieves the following results on the evaluation set: - Loss: 0.2841 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - total_eval_batch_size: 4 - optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: cosine - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.349 | 0.0954 | 300 | 0.3415 | | 0.2836 | 0.1908 | 600 | 0.3218 | | 0.2934 | 0.2862 | 900 | 0.3117 | | 0.2939 | 0.3816 | 1200 | 0.3045 | | 0.3349 | 0.4770 | 1500 | 0.2988 | | 0.2903 | 0.5724 | 1800 | 0.2937 | | 0.3044 | 0.6678 | 2100 | 0.2894 | | 0.2956 | 0.7632 | 2400 | 0.2862 | | 0.2993 | 0.8586 | 2700 | 0.2846 | | 0.2748 | 0.9540 | 3000 | 0.2841 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.8.0+cu128 - Datasets 3.6.0 - Tokenizers 0.21.1
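The card leaves intended uses open. As a hypothetical quick start (the prompt and generation settings below are illustrative and not taken from the training setup), the model can be served through the standard transformers pipeline:

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="lemonhat/Llama-3.2-3B-Instruct-C2_re_100k_tag5_cleaned_hermes_toolv6_dethink_replacedv1",
    device_map="auto",
)
messages = [{"role": "user", "content": "Briefly explain what a tool-calling LLM does."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```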
joanna302/Qwen3-8B-Base_en_alpaca_SFT_8e-05_fr_pt_zh_ar_without_en_DPO_5e-07_beta_0.3_model_ref
joanna302
2025-09-18T11:07:49Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "dpo", "unsloth", "trl", "arxiv:2305.18290", "base_model:joanna302/Qwen3-8B-Base_en_alpaca_SFT_8e-05", "base_model:finetune:joanna302/Qwen3-8B-Base_en_alpaca_SFT_8e-05", "endpoints_compatible", "region:us" ]
null
2025-09-18T10:24:17Z
--- base_model: joanna302/Qwen3-8B-Base_en_alpaca_SFT_8e-05 library_name: transformers model_name: Qwen3-8B-Base_en_alpaca_SFT_8e-05_fr_pt_zh_ar_without_en_DPO_5e-07_beta_0.3_model_ref tags: - generated_from_trainer - dpo - unsloth - trl licence: license --- # Model Card for Qwen3-8B-Base_en_alpaca_SFT_8e-05_fr_pt_zh_ar_without_en_DPO_5e-07_beta_0.3_model_ref This model is a fine-tuned version of [joanna302/Qwen3-8B-Base_en_alpaca_SFT_8e-05](https://huggingface.co/joanna302/Qwen3-8B-Base_en_alpaca_SFT_8e-05). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="joanna302/Qwen3-8B-Base_en_alpaca_SFT_8e-05_fr_pt_zh_ar_without_en_DPO_5e-07_beta_0.3_model_ref", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prism-eval/Qwen3-8B-Base_en_alpaca_SFT_8e-05_fr_pt_zh_ar_without_en_DPO_5e-07_beta_0.3_model_ref/runs/xo7ec2ma) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.22.2 - Transformers: 4.55.4 - Pytorch: 2.8.0 - Datasets: 3.6.0 - Tokenizers: 0.21.4 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
cubbk/dinov2-base-finetuned-clothes-big
cubbk
2025-09-18T11:06:31Z
40
0
transformers
[ "transformers", "safetensors", "dinov2", "image-classification", "generated_from_trainer", "dataset:webdataset", "base_model:facebook/dinov2-base", "base_model:finetune:facebook/dinov2-base", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-08-22T17:16:50Z
--- library_name: transformers license: apache-2.0 base_model: facebook/dinov2-base tags: - generated_from_trainer datasets: - webdataset metrics: - accuracy model-index: - name: dinov2-base-finetuned-clothes-big results: - task: name: Image Classification type: image-classification dataset: name: webdataset type: webdataset config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.959008487654321 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dinov2-base-finetuned-clothes-big This model is a fine-tuned version of [facebook/dinov2-base](https://huggingface.co/facebook/dinov2-base) on [~40 GB of H&M clothing images](https://www.kaggle.com/competitions/h-and-m-personalized-fashion-recommendations). It is meant to classify garments into categories such as "upper garment", "lower_garment", "underpants", etc. It achieves the following results on the evaluation set: - Loss: 0.1260 - Accuracy: 0.9590 The true performance is likely even better, since the dataset sometimes contains wrong labels, and several labels can fit the same piece of clothing. ## How to use ```py from transformers import pipeline pipe = pipeline("image-classification", model="cubbk/dinov2-base-finetuned-clothes-big") result = pipe("https://static.zara.net/assets/public/9cee/ace9/4ac84b608e84/df271252ff15/04391788800-e1/04391788800-e1.jpg?ts=1758018702215&w=1500") result[0] ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 144 - eval_batch_size: 144 - seed: 42 - optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3022 | 1.0 | 583 | 0.2322 | 0.9210 | | 0.2158 | 2.0 | 1166 | 0.1539 | 0.9501 | | 0.1767 | 3.0 | 1749 | 0.1260 | 0.9590 | ### Framework versions - Transformers 4.55.3 - Pytorch 2.8.0+cu128 - Datasets 4.0.0 - Tokenizers 0.21.4
aamijar/ReplaceME-Llama-3.1-8B-Instruct-lora-r8-mrpc
aamijar
2025-09-18T11:05:43Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-18T11:05:41Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TAUR-dev/M-rl_1e_v2__pv_v3-rl
TAUR-dev
2025-09-18T11:05:07Z
2
0
null
[ "safetensors", "qwen2", "en", "license:mit", "region:us" ]
null
2025-09-18T01:30:48Z
--- language: en license: mit --- # M-rl_1e_v2__pv_v3-rl ## Model Details - **Training Method**: VeRL Reinforcement Learning (RL) - **Stage Name**: rl - **Experiment**: rl_1e_v2__pv_v3 - **RL Framework**: VeRL (Versatile Reinforcement Learning) ## Training Configuration ## Experiment Tracking 🔗 **View complete experiment details**: https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__rl_1e_v2__pv_v3__v1 ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-rl_1e_v2__pv_v3-rl") model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-rl_1e_v2__pv_v3-rl") ```
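Building on the loading snippet above, a hypothetical generation call might look as follows. The chat-template call assumes the bundled Qwen2 tokenizer ships one, and the prompt, dtype, and generation length are illustrative only:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-rl_1e_v2__pv_v3-rl")
model = AutoModelForCausalLM.from_pretrained(
    "TAUR-dev/M-rl_1e_v2__pv_v3-rl", torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "What is 17 * 24? Show your reasoning."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```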
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758193280
schooncestiaa
2025-09-18T11:02:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scruffy webbed dragonfly", "arxiv:2504.07091", "region:us" ]
null
2025-09-18T11:02:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - scruffy webbed dragonfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Jr12lm12/llama-3.1-8b-climate-expert
Jr12lm12
2025-09-18T11:01:05Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-18T11:00:42Z
--- base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Jr12lm12 - **License:** apache-2.0 - **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
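The card ships no usage snippet; here is a minimal sketch using Unsloth's own loader, under the assumption that the repo can be loaded directly in 4-bit. The sequence length, prompt, and generation settings are illustrative:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Jr12lm12/llama-3.1-8b-climate-expert",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's faster inference path

messages = [{"role": "user", "content": "Summarize the main drivers of sea-level rise."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to("cuda")

outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```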
mradermacher/EvaGPT-German-Mis-X-LlamaTok-DE-0-44B-GGUF
mradermacher
2025-09-18T11:00:36Z
0
0
transformers
[ "transformers", "gguf", "gpt", "llm", "large language model", "h2o-llmstudio", "de", "base_model:MTSmash/EvaGPT-German-Mis-X-LlamaTok-DE-0-44B", "base_model:quantized:MTSmash/EvaGPT-German-Mis-X-LlamaTok-DE-0-44B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-18T10:56:38Z
--- base_model: MTSmash/EvaGPT-German-Mis-X-LlamaTok-DE-0-44B language: - de library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - gpt - llm - large language model - h2o-llmstudio --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/MTSmash/EvaGPT-German-Mis-X-LlamaTok-DE-0-44B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#EvaGPT-German-Mis-X-LlamaTok-DE-0-44B-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/EvaGPT-German-Mis-X-LlamaTok-DE-0-44B-GGUF/resolve/main/EvaGPT-German-Mis-X-LlamaTok-DE-0-44B.Q3_K_S.gguf) | Q3_K_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/EvaGPT-German-Mis-X-LlamaTok-DE-0-44B-GGUF/resolve/main/EvaGPT-German-Mis-X-LlamaTok-DE-0-44B.Q2_K.gguf) | Q2_K | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/EvaGPT-German-Mis-X-LlamaTok-DE-0-44B-GGUF/resolve/main/EvaGPT-German-Mis-X-LlamaTok-DE-0-44B.IQ4_XS.gguf) | IQ4_XS | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/EvaGPT-German-Mis-X-LlamaTok-DE-0-44B-GGUF/resolve/main/EvaGPT-German-Mis-X-LlamaTok-DE-0-44B.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/EvaGPT-German-Mis-X-LlamaTok-DE-0-44B-GGUF/resolve/main/EvaGPT-German-Mis-X-LlamaTok-DE-0-44B.Q3_K_L.gguf) | Q3_K_L | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/EvaGPT-German-Mis-X-LlamaTok-DE-0-44B-GGUF/resolve/main/EvaGPT-German-Mis-X-LlamaTok-DE-0-44B.Q4_K_S.gguf) | Q4_K_S | 0.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/EvaGPT-German-Mis-X-LlamaTok-DE-0-44B-GGUF/resolve/main/EvaGPT-German-Mis-X-LlamaTok-DE-0-44B.Q4_K_M.gguf) | Q4_K_M | 0.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/EvaGPT-German-Mis-X-LlamaTok-DE-0-44B-GGUF/resolve/main/EvaGPT-German-Mis-X-LlamaTok-DE-0-44B.Q5_K_S.gguf) | Q5_K_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/EvaGPT-German-Mis-X-LlamaTok-DE-0-44B-GGUF/resolve/main/EvaGPT-German-Mis-X-LlamaTok-DE-0-44B.Q5_K_M.gguf) | Q5_K_M | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/EvaGPT-German-Mis-X-LlamaTok-DE-0-44B-GGUF/resolve/main/EvaGPT-German-Mis-X-LlamaTok-DE-0-44B.Q6_K.gguf) | Q6_K | 0.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/EvaGPT-German-Mis-X-LlamaTok-DE-0-44B-GGUF/resolve/main/EvaGPT-German-Mis-X-LlamaTok-DE-0-44B.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/EvaGPT-German-Mis-X-LlamaTok-DE-0-44B-GGUF/resolve/main/EvaGPT-German-Mis-X-LlamaTok-DE-0-44B.f16.gguf) | f16 | 1.0 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Llama-3.1-Argunaut-1-8B-HIRPO-GGUF
mradermacher
2025-09-18T11:00:36Z
0
0
transformers
[ "transformers", "gguf", "logic", "argumentation", "critical-thinking", "argument-mapping", "generated_from_trainer", "trl", "rlvr", "hirpo", "en", "dataset:DebateLabKIT/arguments-and-debates", "base_model:DebateLabKIT/Llama-3.1-Argunaut-1-8B-HIRPO", "base_model:quantized:DebateLabKIT/Llama-3.1-Argunaut-1-8B-HIRPO", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-18T09:15:52Z
--- base_model: DebateLabKIT/Llama-3.1-Argunaut-1-8B-HIRPO datasets: - DebateLabKIT/arguments-and-debates language: - en library_name: transformers model_name: Llama-3.1-Argunaut-1-8B-HIRPO mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - logic - argumentation - critical-thinking - argument-mapping - generated_from_trainer - trl - rlvr - hirpo --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/DebateLabKIT/Llama-3.1-Argunaut-1-8B-HIRPO <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama-3.1-Argunaut-1-8B-HIRPO-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3.1-Argunaut-1-8B-HIRPO-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Argunaut-1-8B-HIRPO-GGUF/resolve/main/Llama-3.1-Argunaut-1-8B-HIRPO.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Argunaut-1-8B-HIRPO-GGUF/resolve/main/Llama-3.1-Argunaut-1-8B-HIRPO.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Argunaut-1-8B-HIRPO-GGUF/resolve/main/Llama-3.1-Argunaut-1-8B-HIRPO.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Argunaut-1-8B-HIRPO-GGUF/resolve/main/Llama-3.1-Argunaut-1-8B-HIRPO.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Argunaut-1-8B-HIRPO-GGUF/resolve/main/Llama-3.1-Argunaut-1-8B-HIRPO.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Argunaut-1-8B-HIRPO-GGUF/resolve/main/Llama-3.1-Argunaut-1-8B-HIRPO.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Argunaut-1-8B-HIRPO-GGUF/resolve/main/Llama-3.1-Argunaut-1-8B-HIRPO.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Argunaut-1-8B-HIRPO-GGUF/resolve/main/Llama-3.1-Argunaut-1-8B-HIRPO.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Argunaut-1-8B-HIRPO-GGUF/resolve/main/Llama-3.1-Argunaut-1-8B-HIRPO.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Argunaut-1-8B-HIRPO-GGUF/resolve/main/Llama-3.1-Argunaut-1-8B-HIRPO.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Argunaut-1-8B-HIRPO-GGUF/resolve/main/Llama-3.1-Argunaut-1-8B-HIRPO.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Argunaut-1-8B-HIRPO-GGUF/resolve/main/Llama-3.1-Argunaut-1-8B-HIRPO.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): 
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
Flo0620/Qwen2_5_7B_r64_a128_d0_2_756TrainSize2
Flo0620
2025-09-18T11:00:36Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-09-18T10:42:56Z
--- base_model: Qwen/Qwen2.5-VL-7B-Instruct library_name: transformers model_name: Qwen2_5_7B_r64_a128_d0_2_756TrainSize2 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for Qwen2_5_7B_r64_a128_d0_2_756TrainSize2 This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Flo0620/Qwen2_5_7B_r64_a128_d0_2_756TrainSize2", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.15.2 - Transformers: 4.52.0.dev0 - Pytorch: 2.6.0+cu124 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
krizk093/test
krizk093
2025-09-18T10:58:39Z
0
0
diffusers
[ "diffusers", "flux", "text-to-image", "lora", "fal", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-18T10:58:34Z
--- tags: - flux - text-to-image - lora - diffusers - fal base_model: black-forest-labs/FLUX.1-dev instance_prompt: Hannah license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # test <Gallery /> ## Model description A FLUX.1-dev text-to-image LoRA for the subject `Hannah`. ## Trigger words You should use `Hannah` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/krizk093/test/tree/main) them in the Files & versions tab. ## Training at fal.ai Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
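A minimal inference sketch with diffusers, assuming access to the gated FLUX.1-dev base weights; the prompt, step count, and guidance scale are illustrative. Note that the trigger word must appear in the prompt:

```python
import torch
from diffusers import FluxPipeline

# Load the base model, then attach this LoRA on top of it.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("krizk093/test")

image = pipe(
    "portrait photo of Hannah, natural light",  # `Hannah` is the trigger word
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("hannah.png")
```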
vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-1024-PKU-SafeRLHF-OMD-0914190921-epoch-9
vectorzhou
2025-09-18T10:58:21Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "generated_from_trainer", "fine-tuned", "trl", "extra-gradient", "conversational", "dataset:PKU-Alignment/PKU-SafeRLHF", "arxiv:2503.08942", "base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-1024", "base_model:finetune:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-1024", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-18T10:57:44Z
--- base_model: vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-1024 datasets: PKU-Alignment/PKU-SafeRLHF library_name: transformers model_name: gemma-2-2b-it-alpaca-cleaned-SFT-1024-PKU-SafeRLHF-OMD tags: - generated_from_trainer - text-generation - fine-tuned - trl - extra-gradient licence: license --- # Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-1024-PKU-SafeRLHF-OMD This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-1024](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-1024) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-1024-PKU-SafeRLHF-OMD-0914190921-epoch-9", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zrl_csl_nlhf/nlhf/runs/6tm2oyul) This model was trained with Extragradient, a method introduced in [Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback](https://huggingface.co/papers/2503.08942). ### Framework versions - TRL: 0.23.0 - Transformers: 4.56.1 - Pytorch: 2.8.0+cu128 - Datasets: 4.0.0 - Tokenizers: 0.22.0 ## Citations Cite Extragradient as: ```bibtex @misc{zhou2025extragradientpreferenceoptimizationegpo, title={Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback}, author={Runlong Zhou and Maryam Fazel and Simon S. Du}, year={2025}, eprint={2503.08942}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2503.08942}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
aamijar/ReplaceME-Llama-3.1-8B-Instruct-lora-r8-mrpc-epochs3
aamijar
2025-09-18T10:57:29Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-18T10:57:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]